- details: Agents are Controllers with a common Generation API with enhanced memory and tooling.
-
----
-# {{ $frontmatter.title }}
-
-Active Agent provides a structured approach to building AI-powered applications through Agent Oriented Programming. Designing applications using agents allows developers to create modular, reusable components that can be easily integrated into existing systems. This approach promotes code reusability, maintainability, and scalability, making it easier to build complex AI-driven applications with the Object Oriented Ruby code you already use today.
-
-## MVC Architecture
-Active Agent is built around a few core components that work together to provide a seamless experience for developers and users. Using familiar concepts from Rails that made it the MVC framework of choice for web applications, Active Agent extends these concepts to the world of AI and generative models. At the core of Active Agent is Action Prompt, which provides a structured way to manage prompts, actions, and responses. The framework is designed to be modular and extensible, allowing developers to create custom agents with actions that render prompts to generate responses.
-
-
-
-## Model: Prompt Context
-Action Prompt allows Agent classes to define actions that return prompt contexts with formatted messages.
-
-The Prompt object is the core data model that contains the runtime context messages, actions (tools), and configuration for the prompt. It is responsible for managing the contextual history and providing the necessary information for prompt and response cycles.
-
-
- details: Agents are Controllers with a common Generation API with enhanced memory and tooling.
-
----
-# Framework Overview
-
-Active Agent provides a structured approach to building AI-powered applications through Agent Oriented Programming. Designing applications using agents allows developers to create modular, reusable components that can be easily integrated into existing systems. This approach promotes code reusability, maintainability, and scalability, making it easier to build complex AI-driven applications with the Object Oriented Ruby code you already use today.
-
-Agent instructions are action views rendered as system messages into the agent's context, `prompt.messages`.
-
-Actions render user, assistant, and tool messages using the views associated with the agent, following Action View naming conventions. Tools can be defined by providing JSON action views, but actions can also simply be formatted prompt message templates or assistant response templates.
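-
-For example, a sketch of those conventions (the agent, action, and view names here are illustrative assumptions):
-
-```ruby
-class SupportAgent < ApplicationAgent
-  generate_with :openai, model: "gpt-4o"
-
-  # Renders app/views/support_agent/triage.text.erb as the action's prompt message.
-  # If app/views/support_agent/triage.json.jbuilder exists, it is used as the tool schema.
-  def triage
-    @ticket = params[:ticket]
-    prompt
-  end
-end
-```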
-
-## Core Concepts
-Active Agent is built around a few core concepts that form the foundation of the framework. These concepts are designed to be familiar to developers who have experience with Ruby on Rails, making it easy to get started with Active Agent.
-
- **Agents** are abstract controllers that handle AI interactions using a specified generation provider. Agents are more than lifeless objects; they are the controllers of your application's AI features, responsible for managing the flow of data and interactions between the different components of your application. Active Agent provides a set of tools and conventions to help you build agents that are easy to understand, maintain, and extend.
- **Actions** are the agent's interface for performing tasks and rendering Action Views for templated agent prompts and user interfaces. They provide a way to define reusable components (or leverage your existing view templates) that can be easily integrated into different agents. Actions can be used to retrieve data, create custom views, handle user input, and manage the flow of data between different components of your application.
- **Prompts** are the core data model containing the runtime context, messages, actions (tools), and configuration for the prompt.
- **Views** are responsible for presenting the formatted message content used in a prompt's context, and its associated data, to the agent and user.
- **Generation Providers** are the agent's backend interfaces to AI services, enabling agents to generate content, create embeddings, and perform actions through tool calls.
-
-
-### Queued Generation Jobs
-Active Agent provides a built-in job queue for generating content asynchronously. This allows for efficient processing of requests and keeps the application responsive even under heavy load. Scale it just as you would any other Rails application using Active Job.
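-
-A minimal sketch, assuming a `TravelAgent` like the one shown later in these docs: `generate_later` enqueues the prompt-generation cycle on Active Job, while `generate_now` runs it inline.
-
-```ruby
-# Enqueue the prompt-generation cycle on Active Job instead of blocking the caller
-TravelAgent.with(message: "Find flights from NYC to Tokyo").prompt_context.generate_later
-
-# Generate synchronously when the result is needed immediately
-response = TravelAgent.with(message: "Find flights from NYC to Tokyo").prompt_context.generate_now
-```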
-
-### Generation Providers are the AI Service Backends
-Generation providers are the backend interfaces to AI services that enable agents to generate content, create embeddings, and request actions. They provide a common interface across AI providers, allowing developers to switch between them without changing the core application logic. Using the `generate_with` method, you can easily switch between providers, configurations, instructions, models, and other parameters to optimize your agentic processes.
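-
-For example, two agents can target different backends with `generate_with`; the agent class names below are illustrative, while the provider names and models mirror those used elsewhere in these docs.
-
-```ruby
-class SupportAgent < ApplicationAgent
-  # OpenAI backend with a per-agent model and temperature
-  generate_with :openai, model: "gpt-4o", temperature: 0.7
-end
-
-class LocalAgent < ApplicationAgent
-  # Swap to a local Ollama backend without changing the agent's actions
-  generate_with :ollama, model: "llama3"
-end
-```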
-
-
diff --git a/docs/docs/framework/action-prompt.md b/docs/docs/framework/action-prompt.md
deleted file mode 100644
index 4c583fdc..00000000
--- a/docs/docs/framework/action-prompt.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: Action Prompt
----
-# {{ $frontmatter.title }}
-
-Action Prompt provides a structured way to manage prompt contexts and handle responses with callbacks as well as perform actions that render messages using Action View templates.
-
-ActiveAgent::Base implements a `prompt_context` action that, by default, renders a prompt context object with plain text message content and can contain messages, actions, and params. This action doesn't need a template as long as a Message or String is passed in as `message: params[:message]`.
-
-<<< @/../test/agents/application_agent_test.rb#application_agent_prompt_context_message_generation {ruby:line-numbers}
-
-Just as Action Mailer renders mail messages that are delivered through configured delivery methods, Action Prompt integrates with generation providers through the generation module. This allows for dynamic content generation and the use of Rails helpers and partials within prompt templates, as well as rendering content from performed actions, giving developers a powerful way to create interactive and engaging user experiences.
-
-## Prompt-generation Request-Response Cycle
-The prompt-generation cycle is similar to the request-response cycle of Action Controller and is at the core of the Active Agent framework. It involves the following steps (a caller-side sketch follows the list):
-1. **Prompt Context**: The Prompt object is created with the necessary context, including messages, actions, and parameters.
-2. **Generation Request**: The agent sends a request to the generation provider with the prompt context, including the messages and actions.
-3. **Generation Response**: The generation provider processes the request and returns a response, which is then passed back to the agent.
-4. **Response Handling**: The agent processes the response which can be sent back to the user or used for further processing.
-5. **Action Execution**: If the response includes actions, the agent executes them and updates the context accordingly.
-6. **Updated Context**: The context is updated with the new messages, actions, and parameters, and the cycle continues.
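-
-A caller-side sketch of one pass through this cycle, using the base `prompt_context` action (the message is an illustrative assumption):
-
-```ruby
-# Steps 1-2: build the prompt context and send it to the generation provider
-generation = ApplicationAgent.with(message: "Summarize our Q3 roadmap").prompt_context
-
-# Steps 3-4: run the request-response cycle and read the generated message
-response = generation.generate_now
-response.message.content # => the assistant's reply
-
-# Steps 5-6: if the response requests actions, the agent performs them and the
-# updated context feeds the next generation request
-```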
-
-## Prompt Context
-Action Prompt renders prompt context objects that represent the contextual data and runtime parameters for the generation process. Prompt context objects contain messages, actions, and params that are passed in the request to the agent's generation provider. The context object is responsible for managing the contextual history and providing the necessary information for prompt and response cycles.
diff --git a/docs/docs/framework/active-agent.md b/docs/docs/framework/active-agent.md
deleted file mode 100644
index 17b697b0..00000000
--- a/docs/docs/framework/active-agent.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-title: Active Agent
-model:
- - title: Context (Model)
- link: /docs/action-prompt/prompts
- icon: 📝
- details: Prompt Context is the core data model that contains the runtime context, messages, variables, and configuration for the prompt.
-view:
- - title: Prompt (View)
- link: /docs/framework/action-prompt
- icon: 🖼️
- details: Action View templates are responsible for rendering the prompts to agents and UI to users.
-controller:
- - title: Agents (Controller)
- link: /docs/framework/active-agent
- icon:
- details: Agents are Controllers with a common Generation API with enhanced memory and tooling.
-
----
-# Active Agent
-
-Agents are Controllers that act as the core of the Active Agent framework. Active Agent manages AI-driven interactions, prompts, actions, and generative responses using Action Prompt. Action Prompt is a structured way to manage prompts, render formatted message content through action views, and handle responses.
-
-Active Agent implements base actions that can be used by any agent that inherits from `ActiveAgent::Base`.
-
-
-The primary action is `prompt_context`, which provides a common interface for rendering prompts with context messages.
-
-::: code-group
-<<< @/../test/dummy/app/agents/translation_agent.rb{ruby:line-numbers} [translation_agent.rb]
-<<< @/../test/dummy/app/views/translation_agent/translate.json.jbuilder{ruby:line-numbers} [translate.json.jbuilder]
-<<< @/../test/dummy/app/views/translation_agent/translate.text.erb{erb:line-numbers} [translate.text.erb]
-:::
-
-## Key Features
-- **Prompt management**: Handle prompt-generation request-response cycles with actions that render templated prompts with messages, context, and params.
-- **[Action methods](/docs/action-prompt/actions)**: Define public methods that become callable tools or functions for the Agent to perform actions that can render prompts to the agent or generative views to the user.
-- **[Queued Generation](/docs/active-agent/queued-generation)**: Manage asynchronous prompt generation and response cycles with Active Job, allowing for efficient processing of requests.
-- **[Callbacks](/docs/active-agent/callbacks)**: Use `before_action`, `after_action`, `before_generation`, `after_generation` callbacks to manage prompt context and handle generated responses.
-- **[Streaming](/docs/active-agent/callbacks#on-stream-callbacks)**: Support real-time updates with the `on_stream` callback to the user interface based on agent interactions.
-
-## Example
-
-::: code-group
-<<< @/../test/dummy/app/agents/travel_agent.rb {ruby} [travel_agent.rb]
-
-<<< @/../test/dummy/app/views/travel_agent/search.html.erb {erb} [search.html.erb]
-
-<<< @/../test/dummy/app/views/travel_agent/book.text.erb {erb} [book.text.erb]
-
-<<< @/../test/dummy/app/views/travel_agent/confirm.text.erb {erb} [confirm.text.erb]
-:::
-
-### Using the Travel Agent
-
-<<< @/../test/agents/travel_agent_test.rb#travel_agent_multi_format{ruby:line-numbers}
-
-::: details Search Response Example
-
-:::
-
-::: details Book Response Example
-
-:::
-
-::: details Confirm Response Example
-
-:::
-
-## Concepts
-### User-Agent interactions
-We're not talking about HTTP User Agents here, but rather the interactions between the user and the AI agent. The user interacts with the agent through a series of prompt context messages and actions that are defined in the agent class. These actions can be used to retrieve data, create custom views, handle user input, and manage the flow of data between different components of your application.
-
-Agents are conceptually similar to a user in the sense that they have a persona, behavior, and state. They can perform actions and have objectives, just like a user. The following table illustrates the similarities between the user and the AI agent:
-| | User | Agent |
-| :------: | ---------: | :----------- |
-| Who | Persona | Archetype |
-| Behavior | Stories | Instructions |
-| State | Scenario | Context |
-| What | Objective | Goal |
-| How | Actions | Tools |
-
-### Agent Oriented Programming (AOP)
-Agent Oriented Programming (AOP) is a programming paradigm that focuses on the use of agents as a primary building block of applications. It allows developers to create modular, reusable components that can be easily integrated into existing systems. AOP promotes code reusability, maintainability, and scalability, making it easier to build complex AI-driven applications.
-| | OOP | AOP |
-| :---------: | ---------------------: | :----------------------------- |
-| unit | Object | Agent |
-| params | message, args, block | prompt, context, tools |
-| computation | method, send, return | perform, generate, response |
-| state | instance variables | prompt context |
-| flow | method calls | prompt and response cycles |
-| constraints | coded logic | written instructions |
diff --git a/docs/docs/framework/concerns.md b/docs/docs/framework/concerns.md
deleted file mode 100644
index f77cb053..00000000
--- a/docs/docs/framework/concerns.md
+++ /dev/null
@@ -1,426 +0,0 @@
-# Using Concerns with ActiveAgent
-
-Concerns provide a powerful way to share functionality, tools, and configurations across multiple agents. This guide shows how to create and use concerns effectively with ActiveAgent.
-
-## Overview
-
-ActiveAgent concerns work just like Rails concerns - they're modules that can be included in agents to share common functionality. This is particularly useful for:
-
-- Sharing tool definitions across agents
-- Providing common actions and prompts
-- Configuring built-in tools (web search, image generation, MCP)
-- Creating reusable agent capabilities
-
-## Creating a Concern
-
-Here's an example of a concern that provides research-related tools:
-
-<<< @/../test/dummy/app/agents/concerns/research_tools.rb#1-126{ruby:line-numbers}
-
-## Using Concerns in Agents
-
-Include the concern in your agent to gain its functionality:
-
-<<< @/../test/dummy/app/agents/research_agent.rb#1-70{ruby:line-numbers}
-
-## Key Features
-
-### Class-Level Configuration
-
-Concerns can provide configuration methods that agents can use:
-
-```ruby
-module ResearchTools
- extend ActiveSupport::Concern
-
- included do
- class_attribute :research_tools_config, default: {}
- end
-
- class_methods do
- def configure_research_tools(**options)
- self.research_tools_config = research_tools_config.merge(options)
- end
- end
-end
-
-class MyResearchAgent < ApplicationAgent
- include ResearchTools
-
- configure_research_tools(
- enable_web_search: true,
- mcp_servers: ["arxiv", "github"],
- default_search_context: "high"
- )
-end
-```
-
-### Actions as Tools
-
-Public methods in concerns become available as tools for the AI:
-
-```ruby
-module DataTools
- extend ActiveSupport::Concern
-
- def calculate_statistics
- data = params[:data]
- # This becomes a tool the AI can call
- {
- mean: data.sum.to_f / data.size,
- median: data.sort[data.size / 2],
- mode: data.group_by(&:itself).values.max_by(&:size).first
- }
- end
-
- def fetch_external_data
- endpoint = params[:endpoint]
- HTTParty.get(endpoint)
- end
-end
-```
-
-### Built-in Tools Configuration
-
-Concerns can configure OpenAI's built-in tools dynamically:
-
-```ruby
-module WebSearchable
- extend ActiveSupport::Concern
-
- def search_web
- query = params[:query]
- context_size = params[:context_size] || "medium"
-
- prompt(
- message: query,
- tools: [
- {
- type: "web_search_preview",
- search_context_size: context_size
- }
- ]
- )
- end
-end
-```
-
-### MCP Integration
-
-Configure MCP (Model Context Protocol) servers in concerns:
-
-```ruby
-module MCPConnectable
- extend ActiveSupport::Concern
-
- def connect_to_services
- services = params[:services] || []
-
- mcp_tools = services.map do |service|
- case service
- when "dropbox"
- {
- type: "mcp",
- connector_id: "connector_dropbox"
- }
- when "github"
- {
- type: "mcp",
- server_url: "https://api.githubcopilot.com/mcp/"
- }
- end
-    end.compact # drop nil entries for unrecognized services
-
- prompt(
- message: "Connect to requested services",
- tools: mcp_tools
- )
- end
-end
-```
-
-## Multiple Concerns
-
-Agents can include multiple concerns to combine capabilities:
-
-```ruby
-class PowerfulAgent < ApplicationAgent
- include ResearchTools
- include WebSearchable
- include DataTools
- include MCPConnectable
-
- generate_with :openai, model: "gpt-4o"
-
- # This agent now has all the tools from all concerns
- def analyze_and_report
- topic = params[:topic]
-
- prompt(
- message: "Analyze #{topic} using all available tools",
- # Tools from all concerns are available
- )
- end
-end
-```
-
-## Testing Concerns
-
-Test concerns to ensure they work correctly:
-
-```ruby
-class ResearchToolsTest < ActiveSupport::TestCase
- setup do
- @agent_class = Class.new(ApplicationAgent) do
- include ResearchTools
- generate_with :openai, model: "gpt-4o"
- end
- @agent = @agent_class.new
- end
-
- test "concern adds expected actions" do
- expected_actions = [
- "search_academic_papers",
- "analyze_research_data",
- "generate_research_visualization"
- ]
-
- agent_actions = @agent.action_methods
- expected_actions.each do |action|
- assert_includes agent_actions, action
- end
- end
-
- test "concern configuration works" do
- @agent_class.configure_research_tools(
- enable_web_search: true,
- mcp_servers: ["arxiv"]
- )
-
- assert @agent_class.research_tools_config[:enable_web_search]
- assert_equal ["arxiv"], @agent_class.research_tools_config[:mcp_servers]
- end
-end
-```
-
-## Best Practices
-
-### 1. Single Responsibility
-
-Each concern should focus on a specific capability:
-
-```ruby
-# Good - focused on research
-module ResearchTools
- # Research-specific tools
-end
-
-# Good - focused on data processing
-module DataProcessing
- # Data processing tools
-end
-
-# Bad - too broad
-module AllTools
- # Everything mixed together
-end
-```
-
-### 2. Configurable Behavior
-
-Make concerns configurable for flexibility:
-
-```ruby
-module Translatable
- extend ActiveSupport::Concern
-
- included do
- class_attribute :translation_config, default: {}
- end
-
- class_methods do
- def configure_translation(target_languages: [], default_language: "en")
- self.translation_config = {
- target_languages: target_languages,
- default_language: default_language
- }
- end
- end
-
- def translate
- text = params[:text]
- target = params[:target] || translation_config[:default_language]
- # Translation logic
- end
-end
-```
-
-### 3. Document Tool Schemas
-
-Include JSON views for tool schemas:
-
-```ruby
-# app/views/research_tools/search_academic_papers.json.jbuilder
-json.type "function"
-json.function do
- json.name action_name
- json.description "Search for academic papers"
- json.parameters do
- json.type "object"
- json.properties do
- json.query do
- json.type "string"
- json.description "Search query"
- end
- json.year_from do
- json.type "integer"
- json.description "Start year for publication date filter"
- end
- end
- json.required ["query"]
- end
-end
-```
-
-### 4. Handle API Differences
-
-Consider different API capabilities:
-
-```ruby
-module AdaptiveTools
- extend ActiveSupport::Concern
-
- private
-
- def responses_api?
- # Check if using Responses API
- options[:use_responses_api] ||
- ["gpt-5", "gpt-4.1"].include?(options[:model])
- end
-
- def configure_tools
- if responses_api?
- # Use built-in tools
- [
- {type: "web_search_preview"},
- {type: "image_generation"}
- ]
- else
- # Use function calling
- []
- end
- end
-end
-```
-
-## Real-World Examples
-
-### Content Generation Concern
-
-```ruby
-module ContentGeneration
- extend ActiveSupport::Concern
-
- def generate_blog_post
- topic = params[:topic]
- style = params[:style] || "informative"
-
- prompt(
- message: "Write a #{style} blog post about #{topic}",
- instructions: "Create engaging, SEO-friendly content"
- )
- end
-
- def generate_social_media
- content = params[:content]
- platforms = params[:platforms] || ["twitter"]
-
- prompt(
- message: "Create social media posts for: #{platforms.join(', ')}",
- context: content
- )
- end
-
- def optimize_seo
- content = params[:content]
- keywords = params[:keywords]
-
- prompt(
- message: "Optimize this content for SEO",
- context: {content: content, keywords: keywords}
- )
- end
-end
-```
-
-### Data Analysis Concern
-
-```ruby
-module DataAnalysis
- extend ActiveSupport::Concern
-
- included do
- class_attribute :analysis_config, default: {
- output_format: :json,
- include_visualizations: false
- }
- end
-
- def analyze_trends
- data = params[:data]
- timeframe = params[:timeframe]
-
- prompt(
- message: "Analyze trends in this data over #{timeframe}",
- content_type: analysis_config[:output_format],
- tools: visualization_tools
- )
- end
-
- private
-
- def visualization_tools
- return [] unless analysis_config[:include_visualizations]
-
- [{
- type: "image_generation",
- size: "1024x768",
- quality: "high"
- }]
- end
-end
-```
-
-## Integration with Rails
-
-Concerns work seamlessly with Rails conventions:
-
-```ruby
-# app/agents/concerns/authenticatable.rb
-module Authenticatable
- extend ActiveSupport::Concern
-
- included do
- before_action :verify_authentication
- end
-
- private
-
- def verify_authentication
- unless current_user
- raise ActiveAgent::AuthenticationError, "User must be authenticated"
- end
- end
-
- def current_user
- # Access Rails current_user or implement agent-specific auth
- @current_user ||= User.find_by(id: params[:user_id])
- end
-end
-```
-
-## Related Documentation
-
-- [Testing ActiveAgent Applications](/docs/framework/testing)
-- [OpenAI Provider Built-in Tools](/docs/generation-providers/openai-provider#built-in-tools-responses-api)
-- [ActiveAgent Framework](/docs/framework/active-agent)
\ No newline at end of file
diff --git a/docs/docs/framework/embeddings.md b/docs/docs/framework/embeddings.md
deleted file mode 100644
index 47f56ab8..00000000
--- a/docs/docs/framework/embeddings.md
+++ /dev/null
@@ -1,384 +0,0 @@
-# Embeddings
-
-Embeddings are numerical representations of text that capture semantic meaning, enabling similarity searches, clustering, and other vector-based operations. ActiveAgent provides a unified interface for generating embeddings across all supported providers.
-
-## Overview
-
-Embeddings transform text into high-dimensional vectors that represent semantic meaning. Similar texts produce similar vectors, enabling powerful features like:
-
-- **Semantic Search** - Find related content by meaning, not just keywords
-- **Clustering** - Group similar documents automatically
-- **Classification** - Categorize text based on similarity to examples
-- **Recommendation** - Suggest related content based on embeddings
-- **Anomaly Detection** - Identify outliers in text data
-
-## Basic Usage
-
-### Generating Embeddings
-
-Use the `embed_now` method to generate embeddings synchronously:
-
-<<< @/../test/agents/embedding_agent_test.rb#embedding_sync_generation {ruby:line-numbers}
-
-::: details Response Example
-
-:::
-
-### Async Embeddings
-
-Generate embeddings in background jobs:
-
-<<< @/../test/agents/embedding_agent_test.rb#embedding_async_generation {ruby:line-numbers}
-
-## Embedding Callbacks
-
-Use callbacks to process embeddings before and after generation:
-
-<<< @/../test/agents/embedding_agent_test.rb#embedding_with_callbacks {ruby:line-numbers}
-
-::: details Response Example
-
-:::
-
-## Provider Configuration
-
-Each provider supports different embedding models and configurations:
-
-### OpenAI
-
-Configure OpenAI-specific embedding models:
-
-<<< @/../test/agents/embedding_agent_test.rb#embedding_openai_model_config {ruby:line-numbers}
-
-::: details Response Example
-
-:::
-
-### Ollama
-
-Configure Ollama for local embedding generation:
-
-<<< @/../test/agents/embedding_agent_test.rb#embedding_ollama_provider_test {ruby:line-numbers}
-
-::: details Response Example
-
-:::
-
-### Error Handling
-
-ActiveAgent provides proper error handling for connection issues:
-
-<<< @/../test/generation_provider/ollama_provider_test.rb#ollama_provider_embed {ruby:line-numbers}
-
-::: details Response Example
-
-:::
-
-## Working with Embeddings
-
-### Similarity Search
-
-Find similar documents using cosine similarity:
-
-<<< @/../test/agents/embedding_agent_test.rb#embedding_similarity_search {ruby:line-numbers}
-
-::: details Response Example
-
-:::
-
-### Batch Processing
-
-Process multiple embeddings efficiently:
-
-<<< @/../test/agents/embedding_agent_test.rb#embedding_batch_processing {ruby:line-numbers}
-
-::: details Response Example
-
-:::
-
-### Embedding Dimensions
-
-Different models produce different embedding dimensions:
-
-<<< @/../test/agents/embedding_agent_test.rb#embedding_dimension_test {ruby:line-numbers}
-
-::: details Response Example
-
-:::
-
-## Advanced Patterns
-
-### Caching Embeddings
-
-Cache embeddings to avoid regenerating them:
-
-```ruby
-class CachedEmbeddingAgent < ApplicationAgent
- def get_embedding(text)
- cache_key = "embedding:#{Digest::SHA256.hexdigest(text)}"
-
- Rails.cache.fetch(cache_key, expires_in: 30.days) do
- generation = self.class.with(message: text).prompt_context
- generation.embed_now.message.content
- end
- end
-end
-```
-
-### Multi-Model Embeddings
-
-Use different models for different purposes:
-
-```ruby
-class MultiModelEmbeddingAgent < ApplicationAgent
- def generate_semantic_embedding(text)
- # High-quality semantic embedding
- self.class.generate_with :openai,
- embedding_model: "text-embedding-3-large"
-
- generation = self.class.with(message: text).prompt_context
- generation.embed_now
- end
-
- def generate_fast_embedding(text)
- # Faster, smaller embedding for real-time use
- self.class.generate_with :openai,
- embedding_model: "text-embedding-3-small"
-
- generation = self.class.with(message: text).prompt_context
- generation.embed_now
- end
-end
-```
-
-## Vector Databases
-
-Store and query embeddings using vector databases:
-
-### PostgreSQL with pgvector
-
-```ruby
-class PgVectorAgent < ApplicationAgent
- def store_document(text)
- # Generate embedding
- generation = self.class.with(message: text).prompt_context
- embedding = generation.embed_now.message.content
-
- # Store in PostgreSQL with pgvector
- Document.create!(
- content: text,
- embedding: embedding # pgvector column
- )
- end
-
- def search_similar(query, limit: 10)
- query_embedding = get_embedding(query)
-
- # Use pgvector's <-> operator for cosine distance
- Document
- .order(Arel.sql("embedding <-> '#{query_embedding}'"))
- .limit(limit)
- end
-end
-```
-
-### Pinecone Integration
-
-```ruby
-class PineconeAgent < ApplicationAgent
- def initialize
- super
- @pinecone = Pinecone::Client.new(api_key: ENV['PINECONE_API_KEY'])
- @index = @pinecone.index('documents')
- end
-
- def upsert_document(id, text, metadata = {})
- embedding = get_embedding(text)
-
- @index.upsert(
- vectors: [{
- id: id,
- values: embedding,
- metadata: metadata.merge(text: text)
- }]
- )
- end
-
- def query_similar(text, top_k: 10)
- embedding = get_embedding(text)
-
- @index.query(
- vector: embedding,
- top_k: top_k,
- include_metadata: true
- )
- end
-end
-```
-
-## Testing Embeddings
-
-Test embedding functionality with comprehensive test coverage including callbacks, similarity search, and batch processing as shown in the examples above.
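-
-A minimal test sketch, assuming an agent configured with an embedding model as in the configuration examples above:
-
-```ruby
-class EmbeddingGenerationTest < ActiveSupport::TestCase
-  test "embed_now returns a numeric vector for the message" do
-    generation = ApplicationAgent.with(message: "Ruby on Rails").prompt_context
-    embedding = generation.embed_now.message.content
-
-    assert_kind_of Array, embedding
-    assert embedding.all? { |value| value.is_a?(Numeric) }
-  end
-end
-```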
-
-## Performance Optimization
-
-### Batch Processing
-
-Process embeddings in batches for better performance:
-
-```ruby
-class BatchOptimizedAgent < ApplicationAgent
- def process_documents(documents)
- documents.each_slice(100) do |batch|
- Parallel.each(batch, in_threads: 5) do |doc|
- generation = self.class.with(message: doc.content).prompt_context
- doc.embedding = generation.embed_now.message.content
- doc.save!
- end
- end
- end
-end
-```
-
-### Caching Strategy
-
-Implement intelligent caching:
-
-```ruby
-class SmartCacheAgent < ApplicationAgent
- def get_or_generate_embedding(text)
- # Check cache first
- cached = fetch_from_cache(text)
- return cached if cached
-
- # Generate if not cached
- embedding = generate_embedding(text)
-
- # Cache based on text length and importance
- if should_cache?(text)
- cache_embedding(text, embedding)
- end
-
- embedding
- end
-
- private
-
- def should_cache?(text)
- text.length > 100 || text.include?("important")
- end
-end
-```
-
-## Best Practices
-
-1. **Choose the Right Model** - Balance quality, speed, and cost
-2. **Normalize Text** - Preprocess consistently before embedding
-3. **Cache Aggressively** - Embeddings are expensive to generate
-4. **Batch When Possible** - Process multiple texts together
-5. **Monitor Dimensions** - Different models produce different sizes
-6. **Use Callbacks** - Process embeddings consistently
-7. **Handle Failures** - Implement retry logic and fallbacks
-8. **Version Embeddings** - Track which model generated each embedding
-
-## Common Use Cases
-
-### Semantic Search
-
-```ruby
-class SemanticSearchAgent < ApplicationAgent
- def build_search_index(documents)
- documents.each do |doc|
- generation = self.class.with(message: doc.content).prompt_context
- doc.update!(embedding: generation.embed_now.message.content)
- end
- end
-
- def search(query)
- query_embedding = get_embedding(query)
-
- Document
- .select("*, embedding <-> '#{query_embedding}' as distance")
- .order("distance")
- .limit(10)
- end
-end
-```
-
-### Content Recommendations
-
-```ruby
-class RecommendationAgent < ApplicationAgent
- def recommend_similar(article)
- article_embedding = article.embedding || generate_embedding(article.content)
-
- Article
- .where.not(id: article.id)
- .select("*, embedding <-> '#{article_embedding}' as similarity")
- .order("similarity")
- .limit(5)
- end
-end
-```
-
-### Clustering
-
-```ruby
-class ClusteringAgent < ApplicationAgent
- def cluster_documents(documents, num_clusters: 5)
- # Generate embeddings
- embeddings = documents.map do |doc|
- get_embedding(doc.content)
- end
-
- # Use k-means or other clustering algorithm
- clusters = perform_clustering(embeddings, num_clusters)
-
- # Assign documents to clusters
- documents.zip(clusters).each do |doc, cluster_id|
- doc.update!(cluster_id: cluster_id)
- end
- end
-end
-```
-
-## Troubleshooting
-
-### Common Issues
-
-1. **Dimension Mismatch** - Ensure all embeddings use the same model
-2. **Memory Issues** - Large embedding vectors can consume significant RAM
-3. **Rate Limits** - Implement exponential backoff for API limits (see the sketch below)
-4. **Cost Management** - Monitor embedding API usage and costs
-5. **Connection Errors** - Handle network issues with Ollama and other providers
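-
-For rate limits, a hedged sketch of exponential backoff around a generation call (the helper name, retry counts, and rescued error class are assumptions; rescue your provider's specific error where possible):
-
-```ruby
-def generate_with_backoff(generation, attempts: 3)
-  tries = 0
-  begin
-    generation.embed_now
-  rescue StandardError => error
-    tries += 1
-    raise error if tries >= attempts
-
-    # Exponential backoff: wait 2s, 4s, 8s, ... between attempts
-    sleep(2**tries)
-    retry
-  end
-end
-
-# Usage
-generate_with_backoff(ApplicationAgent.with(message: "hello world").prompt_context)
-```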
-
-### Debugging
-
-```ruby
-class DebuggingAgent < ApplicationAgent
- def debug_embedding(text)
- generation = self.class.with(message: text).prompt_context
-
- Rails.logger.info "Generating embedding for: #{text[0..100]}..."
- Rails.logger.info "Provider: #{generation_provider.class.name}"
- Rails.logger.info "Model: #{generation_provider.embedding_model}"
-
- response = generation.embed_now
- embedding = response.message.content
-
- Rails.logger.info "Dimensions: #{embedding.size}"
- Rails.logger.info "Range: [#{embedding.min}, #{embedding.max}]"
- Rails.logger.info "Mean: #{embedding.sum / embedding.size}"
-
- embedding
- end
-end
-```
-
-## Related Documentation
-
-- [Generation Provider Overview](/docs/framework/generation-provider)
-- [OpenAI Provider](/docs/generation-providers/openai-provider)
-- [Ollama Provider](/docs/generation-providers/ollama-provider)
-- [Callbacks](/docs/active-agent/callbacks)
-- [Generation](/docs/active-agent/generation)
\ No newline at end of file
diff --git a/docs/docs/framework/generation-provider.md b/docs/docs/framework/generation-provider.md
deleted file mode 100644
index 18381a56..00000000
--- a/docs/docs/framework/generation-provider.md
+++ /dev/null
@@ -1,205 +0,0 @@
-# Generation Provider
-
-Generation Providers are the backbone of the Active Agent framework, allowing seamless integration with various AI services. They provide a consistent interface for prompting and generating responses, making it easy to switch between different providers without changing the core logic of your application.
-
-## Available Providers
-You can use the following generation providers with Active Agent:
-::: code-group
-
-<<< @/../test/dummy/app/agents/open_ai_agent.rb#snippet{ruby:line-numbers} [OpenAI]
-
-<<< @/../test/dummy/app/agents/anthropic_agent.rb {ruby} [Anthropic]
-
-<<< @/../test/dummy/app/agents/open_router_agent.rb#snippet{ruby:line-numbers} [OpenRouter]
-
-<<< @/../test/dummy/app/agents/ollama_agent.rb#snippet{ruby:line-numbers} [Ollama]
-:::
-
-## Response
-Generation providers handle the request-response cycle for generating responses based on the provided prompts. They process the prompt context, including messages, actions, and parameters, and return the generated response.
-
-### Response Object
-The `ActiveAgent::GenerationProvider::Response` class encapsulates the result of a generation request, providing access to both the processed response and debugging information.
-
-#### Attributes
-
-- **`message`** - The generated response message from the AI provider
-- **`prompt`** - The complete prompt object used for generation, including updated context, messages, and parameters
-- **`raw_response`** - The unprocessed response data from the AI provider, useful for debugging and accessing provider-specific metadata
-
-#### Example Usage
-
-<<< @/../test/generation_provider_examples_test.rb#generation_response_usage{ruby:line-numbers}
-
-::: details Response Example
-
-:::
-The response object ensures you have full visibility into both the input prompt context and the raw provider response, making it easy to debug generation issues or access provider-specific response metadata.
-
-## Provider Configuration
-
-You can configure generation providers with custom settings:
-
-### Model and Temperature Configuration
-
-<<< @/../test/generation_provider_examples_test.rb#anthropic_provider_example{ruby:line-numbers}
-
-<<< @/../test/generation_provider_examples_test.rb#google_provider_example{ruby:line-numbers}
-
-### Custom Host Configuration
-
-For Azure OpenAI or other custom endpoints:
-
-<<< @/../test/generation_provider_examples_test.rb#custom_host_configuration{ruby:line-numbers}
-
-## Configuration Precedence
-
-ActiveAgent follows a clear hierarchy for configuration parameters, ensuring that you have fine-grained control over your AI generation settings. Parameters can be configured at multiple levels, with higher-priority settings overriding lower-priority ones.
-
-### Precedence Order (Highest to Lowest)
-
-1. **Runtime Options** - Parameters passed directly to the `prompt` method
-2. **Agent Options** - Parameters defined in `generate_with` at the agent class level
-3. **Global Configuration** - Parameters in `config/active_agent.yml`
-
-This hierarchy allows you to:
-- Set sensible defaults globally
-- Override them for specific agents
-- Make runtime adjustments for individual requests
-
-### Example: Configuration Precedence in Action
-
-<<< @/../test/agents/configuration_precedence_test.rb#test_configuration_precedence{ruby:line-numbers}
-
-### Data Collection Precedence Example
-
-The `data_collection` parameter for OpenRouter follows the same precedence rules:
-
-<<< @/../test/agents/configuration_precedence_test.rb#test_data_collection_precedence{ruby:line-numbers}
-
-### Key Principles
-
-#### 1. Runtime Always Wins
-Runtime options in the `prompt` method override all other configurations. See the test demonstrating this behavior:
-
-<<< @/../test/agents/configuration_precedence_test.rb#runtime_options_override{ruby:line-numbers}
-
-#### 2. Nil Values Don't Override
-Nil values passed at runtime don't override existing configurations:
-
-<<< @/../test/agents/configuration_precedence_test.rb#nil_values_dont_override{ruby:line-numbers}
-
-#### 3. Agent Configuration Overrides Global
-Agent-level settings take precedence over global configuration files:
-
-<<< @/../test/agents/configuration_precedence_test.rb#agent_overrides_config{ruby:line-numbers}
-
-### Supported Runtime Options
-
-The following options can be overridden at runtime (a sketch follows the list):
-
-- `:model` - The AI model to use
-- `:temperature` - Creativity/randomness (0.0-1.0)
-- `:max_tokens` - Maximum response length
-- `:stream` - Enable streaming responses
-- `:top_p` - Nucleus sampling parameter
-- `:frequency_penalty` - Reduce repetition
-- `:presence_penalty` - Encourage topic diversity
-- `:response_format` - Structured output format
-- `:seed` - For reproducible outputs
-- `:stop` - Stop sequences
-- `:tools_choice` - Tool selection strategy
-- `:data_collection` - Privacy settings (OpenRouter)
-- `:require_parameters` - Provider parameter validation (OpenRouter)
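-
-A sketch of passing a few of these options at runtime (the agent, action, and values are illustrative); anything passed to `prompt` overrides the agent-level and global settings:
-
-```ruby
-class ReportAgent < ApplicationAgent
-  # Agent-level defaults
-  generate_with :openai, model: "gpt-4o", temperature: 0.7
-
-  def summarize
-    prompt(
-      message: params[:text],
-      temperature: 0.2, # runtime override of the agent default
-      max_tokens: 300   # runtime-only setting
-    )
-  end
-end
-```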
-
-### Best Practices
-
-1. **Use Global Config for Defaults**: Set organization-wide defaults in `config/active_agent.yml`
-2. **Agent-Level for Specific Needs**: Override in `generate_with` for agent-specific requirements
-3. **Runtime for Dynamic Adjustments**: Use runtime options for user preferences or conditional logic
-
-For a complete example showing all three levels working together, see:
-
-<<< @/../test/agents/configuration_precedence_test.rb#test_configuration_precedence{ruby:line-numbers}
-
-## Embeddings Support
-
-Generation providers support creating text embeddings for semantic search, clustering, and similarity matching. Embeddings transform text into numerical vectors that capture semantic meaning.
-
-### Generating Embeddings Synchronously
-
-Use `embed_now` to generate embeddings immediately:
-
-<<< @/../test/agents/embedding_agent_test.rb#embedding_sync_generation{ruby:line-numbers}
-
-::: details Response Example
-
-:::
-
-### Asynchronous Embedding Generation
-
-Use `embed_later` for background processing of embeddings:
-
-<<< @/../test/agents/embedding_agent_test.rb#embedding_async_generation{ruby:line-numbers}
-
-### Embedding Callbacks
-
-Process embeddings with before and after callbacks:
-
-<<< @/../test/agents/embedding_agent_test.rb#embedding_with_callbacks{ruby:line-numbers}
-
-::: details Response Example
-
-:::
-
-### Similarity Search
-
-Use embeddings to find semantically similar content:
-
-<<< @/../test/agents/embedding_agent_test.rb#embedding_similarity_search{ruby:line-numbers}
-
-::: details Response Example
-
-:::
-
-### Provider-Specific Embedding Models
-
-Different providers offer various embedding models:
-
-- **OpenAI**: `text-embedding-3-large`, `text-embedding-3-small`, `text-embedding-ada-002`
-- **Ollama**: `nomic-embed-text`, `mxbai-embed-large`, `all-minilm`
-- **Anthropic**: Does not natively support embeddings (use a dedicated embedding provider)
-
-### Configuration
-
-Configure embedding models in your agent:
-
-```ruby
-class EmbeddingAgent < ApplicationAgent
- generate_with :openai,
- model: "gpt-4", # For text generation
- embedding_model: "text-embedding-3-large" # For embeddings
-end
-```
-
-Or in your configuration file:
-
-```yaml
-development:
- openai:
- model: gpt-4
- embedding_model: text-embedding-3-large
- dimensions: 256 # Optional: reduce embedding dimensions
-```
-
-For more details on embeddings, see the [Embeddings Guide](/docs/framework/embeddings).
-
-## Provider-Specific Documentation
-
-For detailed documentation on specific providers and their features:
-
-- [OpenAI Provider](/docs/generation-providers/openai-provider) - GPT-4, GPT-3.5, function calling, vision, and Azure OpenAI support
-- [Anthropic Provider](/docs/generation-providers/anthropic-provider) - Claude 3.5 and Claude 3 models with extended context windows
-- [Ollama Provider](/docs/generation-providers/ollama-provider) - Local LLM inference for privacy-sensitive applications
-- [OpenRouter Provider](/docs/generation-providers/open-router-provider) - Multi-model routing with fallbacks, PDF processing, and vision support
-
diff --git a/docs/docs/framework/rails-integration.md b/docs/docs/framework/rails-integration.md
deleted file mode 100644
index bfefb5b3..00000000
--- a/docs/docs/framework/rails-integration.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-title: Rails Integration
----
-# {{ $frontmatter.title }}
-Active Agent integrates seamlessly with Rails, leveraging its powerful features to enhance AI-driven applications. This guide covers the key aspects of integrating Active Agent into your Rails application.
-
-## Active Agent compresses the complexity of AI interactions
-Active Agent keeps things simple: no multi-step workflows or unnecessary complexity. It integrates directly into your Rails app with a clear separation of concerns, making AI features easy to implement and maintain. With fewer than 10 lines of code, you can ship an AI feature.
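-
-For instance, a complete feature can look like this sketch (the agent name, action, and prompt are illustrative):
-
-```ruby
-class SummaryAgent < ApplicationAgent
-  generate_with :openai
-
-  def summarize
-    prompt(message: params[:text])
-  end
-end
-
-# Anywhere in your app:
-SummaryAgent.with(text: "Long article text...").summarize.generate_now.message.content
-```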
-
-## User facing interactions
-Active Agent is designed to work seamlessly with Rails applications. It can be easily integrated into your existing Rails app without any additional configuration.
-
-You can pass messages to the agent from Action Controller, and the agent renders a prompt context, generates a response using the configured generation provider, and then handles the response in its own `after_generation` callback.
-
-```ruby
-class MessagesController < ApplicationController
- def create
- # Use the class method with() to pass parameters, then call the action
- generation = TravelAgent.with(message: params[:message]).prompt_context.generate_later
-
- # The generation object tracks the async job
- render json: { job_id: generation.job_id }
- end
-
- def show
- # Check status of a generation
- generation = ActiveAgent::Generation.find(params[:id])
-
- if generation.finished?
- render json: { response: generation.response.message.content }
- else
- render json: { status: "processing" }
- end
- end
-end
-```
-
-## Agent facing interactions
-Your Rails app probably already has business logic abstracted into models, services, and jobs, so you can leverage these to initiate agent interactions. Whether you want to use AI to extract structured data from a new record, have AI interact with third-party APIs, or respond based on the current state of your application, Active Agent can handle these interactions, as sketched below.
-
-```ruby
-class ApplicationAgent < ActiveAgent::Base
- generate_with :openai
-end
-```
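-
-From there, a hedged sketch of triggering an agent out of existing business logic, for example after a record is created (the model, agent, and action names are illustrative assumptions):
-
-```ruby
-class SupportTicket < ApplicationRecord
-  # Kick off extraction in the background once the ticket exists
-  after_create_commit :extract_details
-
-  private
-
-  def extract_details
-    DataExtractionAgent.with(message: description).prompt_context.generate_later
-  end
-end
-```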
\ No newline at end of file
diff --git a/docs/docs/generation-providers/anthropic-provider.md b/docs/docs/generation-providers/anthropic-provider.md
deleted file mode 100644
index 4d81e3af..00000000
--- a/docs/docs/generation-providers/anthropic-provider.md
+++ /dev/null
@@ -1,432 +0,0 @@
-# Anthropic Provider
-
-The Anthropic provider enables integration with Claude models including Claude 3.5 Sonnet, Claude 3 Opus, and Claude 3 Haiku. It offers advanced reasoning capabilities, extended context windows, and strong performance on complex tasks.
-
-## Configuration
-
-### Basic Setup
-
-Configure Anthropic in your agent:
-
-<<< @/../test/dummy/app/agents/anthropic_agent.rb{ruby:line-numbers}
-
-### Configuration File
-
-Set up Anthropic credentials in `config/active_agent.yml`:
-
-::: code-group
-
-<<< @/../test/dummy/config/active_agent.yml#anthropic_anchor{yaml:line-numbers}
-
-<<< @/../test/dummy/config/active_agent.yml#anthropic_dev_config{yaml:line-numbers}
-
-:::
-
-### Environment Variables
-
-Alternatively, use environment variables:
-
-```bash
-ANTHROPIC_API_KEY=your-api-key
-ANTHROPIC_VERSION=2023-06-01 # Optional API version
-```
-
-## Supported Models
-
-### Claude 3.5 Family
-- **claude-3-5-sonnet-latest** - Most intelligent model with best performance
-- **claude-3-5-sonnet-20241022** - Specific version for reproducibility
-
-### Claude 3 Family
-- **claude-3-opus-latest** - Most capable Claude 3 model
-- **claude-3-sonnet-20240229** - Balanced performance and cost
-- **claude-3-haiku-20240307** - Fastest and most cost-effective
-
-## Features
-
-### Extended Context Window
-
-Claude models support up to 200K tokens of context:
-
-```ruby
-class DocumentAnalyzer < ApplicationAgent
- generate_with :anthropic,
- model: "claude-3-5-sonnet-latest",
- max_tokens: 4096
-
- def analyze_document
- @document = params[:document] # Can be very long
- prompt instructions: "Analyze this document thoroughly"
- end
-end
-```
-
-### System Messages
-
-Anthropic models excel at following system instructions:
-
-```ruby
-class SpecializedAgent < ApplicationAgent
- generate_with :anthropic,
- model: "claude-3-5-sonnet-latest",
- system: "You are an expert Ruby developer specializing in Rails applications."
-
- def review_code
- @code = params[:code]
- prompt
- end
-end
-```
-
-### Tool Use
-
-Claude supports function calling through tool use:
-
-```ruby
-class ToolAgent < ApplicationAgent
- generate_with :anthropic, model: "claude-3-5-sonnet-latest"
-
- def process_request
- @request = params[:request]
- prompt # Includes all public methods as tools
- end
-
- def search_database(query:, table:)
- # Tool that Claude can call
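-    # NOTE: sanitize table and query in real code to avoid SQL injection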
- ActiveRecord::Base.connection.execute(
- "SELECT * FROM #{table} WHERE #{query}"
- )
- end
-
- def calculate(expression:)
- # Another available tool
- eval(expression) # In production, use a safe math parser
- end
-end
-```
-
-### Streaming Responses
-
-Enable streaming for real-time output:
-
-```ruby
-class StreamingClaudeAgent < ApplicationAgent
- generate_with :anthropic,
- model: "claude-3-5-sonnet-latest",
- stream: true
-
- on_message_chunk do |chunk|
- # Handle streaming chunks
- ActionCable.server.broadcast("chat_#{params[:session_id]}", chunk)
- end
-
- def chat
- prompt(message: params[:message])
- end
-end
-```
-
-### Structured Output
-
-While Anthropic doesn't provide native structured output like OpenAI's JSON mode, Claude models excel at following JSON format instructions and producing well-structured outputs.
-
-#### Approach
-
-Claude's strong instruction-following capabilities make it reliable for JSON generation:
-
-```ruby
-class AnthropicStructuredAgent < ApplicationAgent
- generate_with :anthropic, model: "claude-3-5-sonnet-latest"
-
- def extract_data
- @text = params[:text]
- @schema = params[:schema]
-
- prompt(
- instructions: build_json_instructions,
- message: @text
- )
- end
-
- private
-
- def build_json_instructions
- <<~INSTRUCTIONS
- You must respond with valid JSON that conforms to this schema:
- #{@schema.to_json}
-
- Ensure your response:
- - Is valid JSON without any markdown formatting
- - Includes all required fields
- - Uses the exact property names from the schema
- - Contains appropriate data types for each field
- INSTRUCTIONS
- end
-end
-```
-
-#### With Schema Generator
-
-Use ActiveAgent's schema generator with Claude:
-
-```ruby
-# Define your model
-class ExtractedData
-  include ActiveModel::Model
-  include ActiveModel::Attributes # needed for the `attribute` API used below
-  include ActiveAgent::SchemaGenerator
-
- attribute :name, :string
- attribute :email, :string
- attribute :age, :integer
-
- validates :name, presence: true
- validates :email, format: { with: URI::MailTo::EMAIL_REGEXP }
-end
-
-# Generate and use the schema
-schema = ExtractedData.to_json_schema
-response = AnthropicAgent.with(
- text: "John Doe, 30 years old, john@example.com",
- schema: schema
-).extract_data.generate_now
-
-# Parse the JSON response
-data = JSON.parse(response.message.content)
-```
-
-#### Best Practices for Structured Output with Claude
-
-1. **Clear Instructions**: Provide explicit JSON formatting instructions in the system message
-2. **Schema in Prompt**: Include the schema definition directly in the prompt
-3. **Example Output**: Consider providing an example of the expected JSON format
-4. **Validation**: Always validate the returned JSON against your schema
-5. **Error Handling**: Implement fallback logic for malformed responses
-
-#### Example with Validation
-
-```ruby
-class ValidatedAnthropicAgent < ApplicationAgent
- generate_with :anthropic, model: "claude-3-5-sonnet-latest"
-
- def extract_with_validation
- response = prompt(
- instructions: json_instructions,
- message: params[:text]
- )
-
- # Validate and parse response
- begin
- json_data = JSON.parse(response.message.content)
- validate_against_schema(json_data)
- json_data
- rescue JSON::ParserError => e
- handle_invalid_json(e)
- end
- end
-
- private
-
- def validate_against_schema(data)
- # Implement schema validation logic
- JSON::Validator.validate!(schema, data)
- end
-end
-```
-
-#### Advantages with Claude
-
-- **Reliability**: Claude consistently follows formatting instructions
-- **Flexibility**: Can handle complex nested schemas
-- **Context**: Excellent at understanding context for accurate extraction
-- **Reasoning**: Can explain extraction decisions when needed
-
-See the [Structured Output guide](/docs/active-agent/structured-output) for more examples and patterns.
-
-### Vision Capabilities
-
-Claude models support image analysis:
-
-```ruby
-class VisionAgent < ApplicationAgent
- generate_with :anthropic, model: "claude-3-5-sonnet-latest"
-
- def analyze_image
- @image_path = params[:image_path]
- @image_base64 = Base64.encode64(File.read(@image_path))
-
- prompt content_type: :text
- end
-end
-
-# In your view (analyze_image.text.erb):
-# Analyze this image: [base64 image data would be included]
-```
-
-## Provider-Specific Parameters
-
-### Model Parameters
-
-- **`model`** - Model identifier (e.g., "claude-3-5-sonnet-latest")
-- **`max_tokens`** - Maximum tokens to generate (required)
-- **`temperature`** - Controls randomness (0.0 to 1.0)
-- **`top_p`** - Nucleus sampling parameter
-- **`top_k`** - Top-k sampling parameter
-- **`stop_sequences`** - Array of sequences to stop generation
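-
-A sketch combining several of these parameters in `generate_with` (the agent name and values are illustrative):
-
-```ruby
-class SummarizerAgent < ApplicationAgent
-  generate_with :anthropic,
-    model: "claude-3-5-sonnet-latest",
-    max_tokens: 1024,               # required by Anthropic
-    temperature: 0.3,
-    stop_sequences: ["\n\nHuman:"]  # illustrative stop sequence
-end
-```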
-
-### Metadata
-
-- **`metadata`** - Custom metadata for request tracking
- ```ruby
- generate_with :anthropic,
- metadata: {
- user_id: -> { Current.user&.id },
- request_id: -> { SecureRandom.uuid }
- }
- ```
-
-### Safety Settings
-
-- **`anthropic_version`** - API version for consistent behavior
-- **`anthropic_beta`** - Enable beta features
-
-## Error Handling
-
-Handle Anthropic-specific errors:
-
-```ruby
-class ResilientAgent < ApplicationAgent
- generate_with :anthropic,
- model: "claude-3-5-sonnet-latest",
- max_retries: 3
-
-  rescue_from Anthropic::RateLimitError do |error|
-    Rails.logger.warn "Rate limited: #{error.message}"
-    sleep(error.retry_after || 60)
-    # `retry` is not valid inside a rescue_from block; re-raise so the
-    # caller (or a queued job) can retry the generation after the wait
-    raise error
-  end
-
- rescue_from Anthropic::APIError do |error|
- Rails.logger.error "Anthropic error: #{error.message}"
- fallback_to_cached_response
- end
-end
-```
-
-## Testing
-
-Example test setup with Anthropic:
-
-```ruby
-class AnthropicAgentTest < ActiveSupport::TestCase
- test "generates response with Claude" do
- VCR.use_cassette("anthropic_claude_response") do
- response = AnthropicAgent.with(
- message: "Explain Ruby blocks"
- ).prompt_context.generate_now
-
- assert_not_nil response.message.content
- assert response.message.content.include?("block")
-
- doc_example_output(response)
- end
- end
-end
-```
-
-## Cost Optimization
-
-### Model Selection
-
-- Use Claude 3 Haiku for simple tasks
-- Use Claude 3.5 Sonnet for complex reasoning
-- Reserve Claude 3 Opus for the most demanding tasks
-
-### Token Management
-
-```ruby
-class EfficientClaudeAgent < ApplicationAgent
- generate_with :anthropic,
- model: "claude-3-haiku-20240307",
- max_tokens: 500 # Limit output length
-
- def quick_summary
- @content = params[:content]
-
- # Truncate input if needed
- if @content.length > 10_000
- @content = @content.truncate(10_000, omission: "... [truncated]")
- end
-
- prompt instructions: "Provide a brief summary"
- end
-end
-```
-
-### Response Caching
-
-```ruby
-class CachedClaudeAgent < ApplicationAgent
- generate_with :anthropic, model: "claude-3-5-sonnet-latest"
-
- def answer_question
- question = params[:question]
-
- cache_key = "claude_answer/#{Digest::SHA256.hexdigest(question)}"
-
- Rails.cache.fetch(cache_key, expires_in: 1.hour) do
- prompt(message: question).generate_now
- end
- end
-end
-```
-
-## Best Practices
-
-1. **Always specify max_tokens** - Required parameter for Anthropic
-2. **Use appropriate models** - Balance cost and capability
-3. **Leverage system messages** - Claude follows them very well
-4. **Handle rate limits gracefully** - Implement exponential backoff
-5. **Monitor token usage** - Track costs and optimize
-6. **Use caching strategically** - Reduce API calls for repeated queries
-7. **Validate outputs** - Especially for critical applications
-
-## Anthropic-Specific Considerations
-
-### Constitutional AI
-
-Claude is trained with Constitutional AI, making it particularly good at:
-- Following ethical guidelines
-- Refusing harmful requests
-- Providing balanced perspectives
-- Being helpful, harmless, and honest
-
-### Context Window Management
-
-```ruby
-class LongContextAgent < ApplicationAgent
- generate_with :anthropic,
- model: "claude-3-5-sonnet-latest",
- max_tokens: 4096
-
- def analyze_codebase
- # Claude can handle very large contexts effectively
- @files = load_all_project_files # Up to 200K tokens
-
- prompt instructions: "Analyze this entire codebase"
- end
-
- private
-
- def load_all_project_files
- Dir.glob("app/**/*.rb").map do |file|
- "// File: #{file}\n#{File.read(file)}"
- end.join("\n\n")
- end
-end
-```
-
-## Related Documentation
-
-- [Generation Provider Overview](/docs/framework/generation-provider)
-- [Configuration Guide](/docs/getting-started#configuration)
-- [Anthropic API Documentation](https://docs.anthropic.com/claude/reference)
\ No newline at end of file
diff --git a/docs/docs/generation-providers/ollama-provider.md b/docs/docs/generation-providers/ollama-provider.md
deleted file mode 100644
index 1fcdc20f..00000000
--- a/docs/docs/generation-providers/ollama-provider.md
+++ /dev/null
@@ -1,564 +0,0 @@
-# Ollama Provider
-
-The Ollama provider enables local LLM inference using the Ollama platform. Run models like Llama 3, Mistral, and Gemma locally without sending data to external APIs, perfect for privacy-sensitive applications and development.
-
-## Configuration
-
-### Basic Setup
-
-Configure Ollama in your agent:
-
-<<< @/../test/dummy/app/agents/ollama_agent.rb#snippet{ruby:line-numbers}
-
-### Configuration File
-
-Set up Ollama in `config/active_agent.yml`:
-
-::: code-group
-
-<<< @/../test/dummy/config/active_agent.yml#ollama_anchor{yaml:line-numbers}
-
-<<< @/../test/dummy/config/active_agent.yml#ollama_dev_config{yaml:line-numbers}
-
-:::
-
-### Environment Variables
-
-Configure via environment:
-
-```bash
-OLLAMA_HOST=http://localhost:11434
-OLLAMA_MODEL=llama3
-```
-
-## Installing Ollama
-
-### macOS/Linux
-
-```bash
-# Install Ollama
-curl -fsSL https://ollama.ai/install.sh | sh
-
-# Start Ollama service
-ollama serve
-
-# Pull a model
-ollama pull llama3
-```
-
-### Docker
-
-```bash
-docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
-docker exec -it ollama ollama pull llama3
-```
-
-## Supported Models
-
-### Popular Models
-
-- **llama3** - Meta's Llama 3 (8B, 70B)
-- **mistral** - Mistral 7B
-- **gemma** - Google's Gemma (2B, 7B)
-- **codellama** - Code-specialized Llama
-- **mixtral** - Mixture of experts model
-- **phi** - Microsoft's Phi-2
-- **neural-chat** - Intel's fine-tuned model
-- **qwen** - Alibaba's Qwen models
-
-### List Available Models
-
-```ruby
-class OllamaAdmin < ApplicationAgent
- generate_with :ollama
-
- def list_models
- # Get list of installed models
- response = HTTParty.get("#{ollama_host}/api/tags")
- response["models"]
- end
-
- private
-
- def ollama_host
- Rails.configuration.active_agent.dig(:ollama, :host) || "http://localhost:11434"
- end
-end
-```
-
-## Features
-
-### Local Inference
-
-Run models completely offline:
-
-```ruby
-class PrivateDataAgent < ApplicationAgent
- generate_with :ollama, model: "llama3"
-
- def process_sensitive_data
- @data = params[:sensitive_data]
- # Data never leaves your infrastructure
- prompt instructions: "Process this confidential information"
- end
-end
-```
-
-### Model Switching
-
-Easily switch between models:
-
-```ruby
-class MultiModelAgent < ApplicationAgent
- def code_review
- # Use specialized code model
- self.class.generate_with :ollama, model: "codellama"
- @code = params[:code]
- prompt
- end
-
- def general_chat
- # Use general purpose model
- self.class.generate_with :ollama, model: "llama3"
- @message = params[:message]
- prompt
- end
-end
-```
-
-### Custom Models
-
-Use fine-tuned or custom models:
-
-```ruby
-class CustomModelAgent < ApplicationAgent
- generate_with :ollama, model: "my-custom-model:latest"
-
- before_action :ensure_model_exists
-
- private
-
- def ensure_model_exists
- # Check if model is available
- models = fetch_available_models
- unless models.include?(generation_provider.model)
- raise "Model #{generation_provider.model} not found. Run: ollama pull #{generation_provider.model}"
- end
- end
-end
-```
-
-### Structured Output
-
-Ollama can generate JSON-formatted responses through careful prompting and model selection. While Ollama doesn't have native structured output like OpenAI, many models can reliably produce JSON when properly instructed.
-
-#### Approach
-
-To get structured output from Ollama:
-
-1. **Choose the right model** - Models like Llama 3, Mixtral, and Mistral are good at following formatting instructions
-2. **Use clear prompts** - Explicitly request JSON format in your instructions
-3. **Set low temperature** - Use values like 0.1-0.3 for more consistent formatting
-4. **Parse and validate** - Always validate the response as it may not be valid JSON
-
-#### Example Approach
-
-```ruby
-class OllamaAgent < ApplicationAgent
- generate_with :ollama,
- model: "llama3",
- temperature: 0.1 # Lower temperature for consistency
-
- def extract_with_json_prompt
- prompt(
- instructions: <<~INST,
- You must respond ONLY with valid JSON.
- Extract the key information and format as:
- {"field1": "value", "field2": "value"}
- No explanation, just the JSON object.
- INST
- message: params[:text]
- )
- end
-end
-
-# Usage - parse the response with error handling (raw_text is the unstructured input)
-response = OllamaAgent.with(text: raw_text).extract_with_json_prompt.generate_now
-begin
- data = JSON.parse(response.message.content)
-rescue JSON::ParserError
- # Handle malformed JSON
-end
-```
-
-#### Best Practices
-
-1. **Model Selection**: Test different models to find which works best for your use case
-2. **Prompt Engineering**: Be very explicit about JSON requirements
-3. **Validation**: Always validate and handle parsing errors
-4. **Local Processing**: Ideal for sensitive data that must stay on-premise
-
-#### Limitations
-
-- No guaranteed JSON output like OpenAI's strict mode
-- Quality varies significantly by model
-- May require multiple attempts or fallback logic
-- Complex schemas may be challenging
-
-For reliable structured output, consider using [OpenAI](/docs/generation-providers/openai-provider#structured-output) or [OpenRouter](/docs/generation-providers/open-router-provider#structured-output-support) providers. For local processing requirements where Ollama is necessary, implement robust validation and error handling.
-
-See the [Structured Output guide](/docs/active-agent/structured-output) for more information about structured output patterns.
-
-### Streaming Responses
-
-Stream responses for better UX:
-
-```ruby
-class StreamingOllamaAgent < ApplicationAgent
- generate_with :ollama,
- model: "llama3",
- stream: true
-
- on_message_chunk do |chunk|
- # Handle streaming chunks
- Rails.logger.info "Chunk: #{chunk}"
- broadcast_to_client(chunk)
- end
-
- def chat
- prompt(message: params[:message])
- end
-end
-```
-
-### Embeddings Support
-
-Generate embeddings locally using Ollama's embedding models. See the [Embeddings Framework Documentation](/docs/framework/embeddings) for comprehensive coverage.
-
-#### Basic Embedding Generation
-
-<<< @/../test/generation_provider/ollama_provider_test.rb#ollama_provider_embed{ruby:line-numbers}
-
-::: details Response Example
-
-:::
-
-::: warning Connection Required
-Ollama must be running locally. If you see connection errors, start Ollama with:
-```bash
-ollama serve
-```
-:::
-
-#### Available Embedding Models
-
-- **nomic-embed-text** - High-quality text embeddings (768 dimensions)
-- **mxbai-embed-large** - Large embedding model (1024 dimensions)
-- **all-minilm** - Lightweight embeddings (384 dimensions)
-
-#### Pull Embedding Models
-
-```bash
-# Install embedding models
-ollama pull nomic-embed-text
-ollama pull mxbai-embed-large
-```
-
-#### Error Handling
-
-Ollama provides helpful error messages when the service is not available:
-
-<<< @/../test/generation_provider/ollama_provider_test.rb#113-136{ruby:line-numbers}
-
-This ensures developers get clear feedback about connection issues.
-
-For more embedding patterns and examples, see the [Embeddings Documentation](/docs/framework/embeddings).
-
-## Provider-Specific Parameters
-
-### Model Parameters
-
-- **`model`** - Model name (e.g., "llama3", "mistral")
-- **`embedding_model`** - Embedding model name (e.g., "nomic-embed-text")
-- **`temperature`** - Controls randomness (0.0 to 1.0)
-- **`top_p`** - Nucleus sampling
-- **`top_k`** - Top-k sampling
-- **`num_predict`** - Maximum tokens to generate
-- **`stop`** - Stop sequences
-- **`seed`** - For reproducible outputs
-
-### System Configuration
-
-- **`host`** - Ollama server URL (default: `http://localhost:11434`)
-- **`timeout`** - Request timeout in seconds
-- **`keep_alive`** - Keep model loaded in memory
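-
-For example, the parameters above can be combined in a single `generate_with` call. This is a sketch; the values are illustrative and should be tuned for your model and hardware:
-
-```ruby
-class TunedOllamaAgent < ApplicationAgent
-  generate_with :ollama,
-    model: "mistral",
-    host: "http://localhost:11434",
-    temperature: 0.2,    # low randomness for more deterministic answers
-    top_p: 0.9,          # nucleus sampling cutoff
-    num_predict: 512,    # cap the number of generated tokens
-    stop: ["\n\n"],      # stop generating at a blank line
-    keep_alive: "10m"    # keep the model loaded between requests
-end
-```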
-
-### Advanced Options
-
-```ruby
-class AdvancedOllamaAgent < ApplicationAgent
- generate_with :ollama,
- model: "llama3",
- options: {
- num_ctx: 4096, # Context window size
- num_gpu: 1, # Number of GPUs to use
- num_thread: 8, # Number of threads
- repeat_penalty: 1.1, # Penalize repetition
- mirostat: 2, # Mirostat sampling
- mirostat_tau: 5.0, # Mirostat tau parameter
- mirostat_eta: 0.1 # Mirostat learning rate
- }
-end
-```
-
-## Performance Optimization
-
-### Model Loading
-
-Keep models in memory for faster responses:
-
-```ruby
-class FastOllamaAgent < ApplicationAgent
- generate_with :ollama,
- model: "llama3",
- keep_alive: "5m" # Keep model loaded for 5 minutes
-
- def quick_response
- @query = params[:query]
- prompt
- end
-end
-```
-
-### Hardware Acceleration
-
-Configure GPU usage:
-
-```ruby
-class GPUAgent < ApplicationAgent
- generate_with :ollama,
- model: "llama3",
- options: {
- num_gpu: -1, # Use all available GPUs
- main_gpu: 0 # Primary GPU index
- }
-end
-```
-
-### Quantization
-
-Use quantized models for better performance:
-
-```bash
-# Pull quantized versions
-ollama pull llama3:8b-q4_0 # 4-bit quantization
-ollama pull llama3:8b-q5_1 # 5-bit quantization
-```
-
-```ruby
-class EfficientAgent < ApplicationAgent
- # Use quantized model for faster inference
- generate_with :ollama, model: "llama3:8b-q4_0"
-end
-```
-
-## Error Handling
-
-Handle Ollama-specific errors:
-
-```ruby
-class RobustOllamaAgent < ApplicationAgent
- generate_with :ollama, model: "llama3"
-
- rescue_from Faraday::ConnectionFailed do |error|
- Rails.logger.error "Ollama connection failed: #{error.message}"
- render_ollama_setup_instructions
- end
-
- rescue_from ActiveAgent::GenerationError do |error|
- if error.message.include?("model not found")
- pull_model_and_retry
- else
- raise
- end
- end
-
- private
-
-  def pull_model_and_retry
-    # `retry` is only valid inside a rescue clause, so pull the missing model
-    # here and let the caller re-run the generation
-    system("ollama pull #{generation_provider.model}")
-  end
-
- def render_ollama_setup_instructions
- "Ollama is not running. Start it with: ollama serve"
- end
-end
-```
-
-## Testing
-
-Test with Ollama locally:
-
-```ruby
-class OllamaAgentTest < ActiveSupport::TestCase
- setup do
- skip "Ollama not available" unless ollama_available?
- end
-
- test "generates response with local model" do
- response = OllamaAgent.with(
- message: "Hello"
- ).prompt_context.generate_now
-
- assert_not_nil response.message.content
- doc_example_output(response)
- end
-
- private
-
- def ollama_available?
- response = Net::HTTP.get_response(URI("http://localhost:11434/api/tags"))
- response.code == "200"
- rescue
- false
- end
-end
-```
-
-## Development Workflow
-
-### Local Development Setup
-
-```ruby
-# config/environments/development.rb
-Rails.application.configure do
- config.active_agent = {
- ollama: {
- host: ENV['OLLAMA_HOST'] || 'http://localhost:11434',
- model: ENV['OLLAMA_MODEL'] || 'llama3',
- options: {
- num_ctx: 4096,
- temperature: 0.7
- }
- }
- }
-end
-```
-
-### Docker Compose Setup
-
-```yaml
-# docker-compose.yml
-version: '3.8'
-services:
- ollama:
- image: ollama/ollama
- ports:
- - "11434:11434"
- volumes:
- - ollama_data:/root/.ollama
- deploy:
- resources:
- reservations:
- devices:
- - driver: nvidia
- count: all
- capabilities: [gpu]
-
-volumes:
- ollama_data:
-```
-
-## Best Practices
-
-1. **Pre-pull models** - Download models before first use
-2. **Monitor memory usage** - Large models require significant RAM
-3. **Use appropriate models** - Balance size and capability
-4. **Keep models loaded** - Use keep_alive for frequently used models
-5. **Implement fallbacks** - Handle connection failures gracefully
-6. **Use quantization** - Reduce memory usage and increase speed
-7. **Test locally** - Ensure models work before deployment
-
-## Ollama-Specific Considerations
-
-### Privacy First
-
-```ruby
-class PrivacyFirstAgent < ApplicationAgent
- generate_with :ollama, model: "llama3"
-
- def process_pii
- @personal_data = params[:personal_data]
-
- # Data stays local - no external API calls
- Rails.logger.info "Processing PII locally with Ollama"
-
- prompt instructions: "Process this data privately"
- end
-end
-```
-
-### Model Management
-
-```ruby
-class ModelManager
- def self.ensure_model(model_name)
- models = list_models
- unless models.include?(model_name)
- pull_model(model_name)
- end
- end
-
- def self.list_models
- response = HTTParty.get("http://localhost:11434/api/tags")
- response["models"].map { |m| m["name"] }
- end
-
- def self.pull_model(model_name)
- system("ollama pull #{model_name}")
- end
-
- def self.delete_model(model_name)
- HTTParty.delete("http://localhost:11434/api/delete",
- body: { name: model_name }.to_json,
- headers: { 'Content-Type' => 'application/json' }
- )
- end
-end
-```
-
-### Deployment Considerations
-
-```ruby
-# Ensure Ollama is available in production
-class ApplicationAgent < ActiveAgent::Base
- before_action :ensure_ollama_available, if: :using_ollama?
-
- private
-
- def using_ollama?
- generation_provider.is_a?(ActiveAgent::GenerationProvider::OllamaProvider)
- end
-
- def ensure_ollama_available
- HTTParty.get("#{ollama_host}/api/tags")
- rescue => e
- raise "Ollama is not available: #{e.message}"
- end
-
- def ollama_host
- Rails.configuration.active_agent.dig(:ollama, :host)
- end
-end
-```
-
-## Related Documentation
-
-- [Embeddings Framework](/docs/framework/embeddings) - Complete guide to embeddings
-- [Generation Provider Overview](/docs/framework/generation-provider)
-- [OpenAI Provider](/docs/generation-providers/openai-provider) - Cloud-based alternative with more models
-- [Configuration Guide](/docs/getting-started#configuration)
-- [Ollama Documentation](https://ollama.ai/docs)
-- [Ollama Model Library](https://ollama.ai/library) - Available models including embedding models
-- [OpenRouter Provider](/docs/generation-providers/open-router-provider) - Cloud-hosted alternative with access to many models
\ No newline at end of file
diff --git a/docs/docs/generation-providers/open-router-provider.md b/docs/docs/generation-providers/open-router-provider.md
deleted file mode 100644
index 87e726e8..00000000
--- a/docs/docs/generation-providers/open-router-provider.md
+++ /dev/null
@@ -1,278 +0,0 @@
-# OpenRouter Provider
-
-OpenRouter provides access to multiple AI models through a unified API, with advanced features like fallback models, multimodal support, and PDF processing.
-
-## Configuration
-
-Configure OpenRouter in your agent:
-
-<<< @/../test/dummy/app/agents/open_router_agent.rb#snippet{ruby:line-numbers}
-
-## Features
-
-### Structured Output Support
-
-OpenRouter supports structured output for compatible models (like OpenAI's GPT-4o and GPT-4o-mini), allowing you to receive responses in a predefined JSON schema format. This is particularly useful for data extraction tasks.
-
-#### Compatible Models
-
-Models that support structured output (the first two also support vision):
-- `openai/gpt-4o`
-- `openai/gpt-4o-mini`
-- `openai/gpt-4-turbo` (structured output only, no vision)
-- `openai/gpt-3.5-turbo` variants (structured output only, no vision)
-
-#### Using Structured Output
-
-Define your schema and pass it to the `prompt` method:
-
-```ruby
-class OpenRouterAgent < ApplicationAgent
- generate_with :open_router, model: "openai/gpt-4o-mini"
-
- def analyze_image
- @image_url = params[:image_url]
-
- prompt(
- message: build_image_message,
- output_schema: image_analysis_schema
- )
- end
-
- private
-
- def image_analysis_schema
- {
- name: "image_analysis",
- strict: true,
- schema: {
- type: "object",
- properties: {
- description: { type: "string" },
- objects: {
- type: "array",
- items: {
- type: "object",
- properties: {
- name: { type: "string" },
- position: { type: "string" },
- color: { type: "string" }
- },
- required: ["name", "position", "color"],
- additionalProperties: false
- }
- },
- scene_type: {
- type: "string",
- enum: ["indoor", "outdoor", "abstract", "document", "photo", "illustration"]
- }
- },
- required: ["description", "objects", "scene_type"],
- additionalProperties: false
- }
- }
- end
-end
-```
-
-::: tip
-When using `strict: true` with OpenAI models, all properties defined in your schema must be included in the `required` array. This ensures deterministic responses.
-:::
-
-For more comprehensive structured output examples, including receipt data extraction and document parsing, see the [Data Extraction Agent documentation](/docs/agents/data-extraction-agent#structured-output).
-
-### Multimodal Support
-
-OpenRouter supports vision-capable models for image analysis:
-
-<<< @/../test/agents/open_router_integration_test.rb#36-62{ruby:line-numbers}
-
-::: details Image Analysis with Structured Output
-
-:::
-
-### Receipt Data Extraction with Structured Output
-
-Extract structured data from receipts and documents using OpenRouter's structured output capabilities. This example demonstrates how to parse receipt images and extract specific fields like merchant information, items, and totals.
-
-#### Test Implementation
-
-<<< @/../test/agents/open_router_integration_test.rb#receipt_extraction_test{ruby:line-numbers}
-
-#### Receipt Schema Definition
-
-<<< @/../test/dummy/app/agents/open_router_integration_agent.rb#receipt_schema{ruby:line-numbers}
-
-The receipt schema ensures consistent extraction of:
-- Merchant name and address
-- Individual line items with names and prices
-- Subtotal, tax, and total amounts
-- Currency information
-
-::: details Receipt Extraction Example Output
-
-:::
-
-::: tip
-This example uses structured output to ensure the receipt data is returned in a consistent JSON format. For more examples of structured data extraction from various document types, see the [Data Extraction Agent documentation](/docs/agents/data-extraction-agent#structured-output).
-:::
-
-### PDF Processing
-
-OpenRouter supports PDF processing with various engines:
-
-<<< @/../test/agents/open_router_integration_test.rb#pdf_processing_local{ruby:line-numbers}
-
-::: details PDF Processing Example
-
-:::
-
-#### PDF Processing Options
-
-OpenRouter offers multiple PDF processing engines:
-
-- **Native Engine**: Charged as input tokens, best for models with built-in PDF support
-- **Mistral OCR Engine**: $2 per 1000 pages, optimized for scanned documents
-- **No Plugin**: For models that have built-in PDF capabilities
-
-Example with OCR engine:
-
-<<< @/../test/agents/open_router_integration_test.rb#pdf_native_support{ruby:line-numbers}
-
-::: details OCR Processing Example
-
-:::
-
-### Fallback Models
-
-Configure fallback models for improved reliability:
-
-<<< @/../test/agents/open_router_integration_test.rb#340-361{ruby:line-numbers}
-
-::: details Fallback Model Example
-
-:::
-
-### Content Transforms
-
-Apply transforms for handling long content:
-
-<<< @/../test/agents/open_router_integration_test.rb#363-380{ruby:line-numbers}
-
-::: details Transform Example
-
-:::
-
-### Usage and Cost Tracking
-
-Track token usage and costs for OpenRouter requests:
-
-<<< @/../test/agents/open_router_integration_test.rb#382-420{ruby:line-numbers}
-
-::: details Usage Tracking Example
-
-:::
-
-## Provider Preferences
-
-Configure provider preferences for routing and data collection:
-
-<<< @/../test/agents/open_router_integration_test.rb#437-454{ruby:line-numbers}
-
-### Data Collection Policies
-
-OpenRouter supports configuring data collection policies to control which providers can collect and use your data for training. According to the [OpenRouter documentation](https://openrouter.ai/docs/features/provider-routing#requiring-providers-to-comply-with-data-policies), you can configure this in three ways:
-
-1. **Allow all providers** (default): All providers can collect data
-2. **Deny all providers**: No providers can collect data
-3. **Selective providers**: Only specified providers can collect data
-
-#### Configuration Examples
-
-<<< @/../test/agents/open_router_integration_test.rb#456-479{ruby:line-numbers}
-
-#### Real-World Example: Privacy-Focused Agent
-
-Here's a complete example of an agent configured to handle sensitive data with strict privacy controls:
-
-<<< @/../test/dummy/app/agents/privacy_focused_agent.rb#privacy_agent_config{ruby:line-numbers}
-
-Processing sensitive financial data:
-
-<<< @/../test/dummy/app/agents/privacy_focused_agent.rb#process_financial_data{ruby:line-numbers}
-
-Selective provider data collection for medical records:
-
-<<< @/../test/dummy/app/agents/privacy_focused_agent.rb#process_medical_records{ruby:line-numbers}
-
-You can configure data collection at multiple levels:
-
-```ruby
-# In config/active_agent.yml
-development:
- open_router:
- api_key: <%= Rails.application.credentials.dig(:open_router, :api_key) %>
- model: openai/gpt-4o
- data_collection: deny # Deny all providers from collecting data
- require_parameters: true # Require model providers to support all specified parameters
-
-# Or allow specific providers only
-production:
- open_router:
- api_key: <%= Rails.application.credentials.dig(:open_router, :api_key) %>
- model: openai/gpt-4o
- data_collection: ["OpenAI", "Google"] # Only these providers can collect data
- require_parameters: false # Allow fallback to providers that don't support all parameters
-
-# In your agent configuration
-class PrivacyFocusedAgent < ApplicationAgent
- generate_with :open_router,
- model: "openai/gpt-4o",
- data_collection: "deny", # Override for this specific agent
- require_parameters: true # Ensure all parameters are supported
-end
-```
-
-::: warning Privacy Considerations
-When handling sensitive data, consider setting `data_collection: "deny"` to ensure your data is not used for model training. This is especially important for:
-- Personal information
-- Proprietary business data
-- Medical or financial records
-- Confidential communications
-:::
-
-::: tip
-The `data_collection` parameter respects OpenRouter's provider compliance requirements. Providers that don't comply with your data collection policy will be automatically excluded from the routing pool.
-:::
-
-## Headers and Site Configuration
-
-OpenRouter supports custom headers for tracking and attribution:
-
-<<< @/../test/agents/open_router_integration_test.rb#420-432{ruby:line-numbers}
-
-## Model Capabilities Detection
-
-The provider automatically detects model capabilities:
-
-<<< @/../test/agents/open_router_integration_test.rb#16-33{ruby:line-numbers}
-
-## Important Notes
-
-### Model Compatibility
-
-When using OpenRouter's advanced features, ensure your chosen model supports the required capabilities:
-
-- **Structured Output**: Requires models like `openai/gpt-4o`, `openai/gpt-4o-mini`, or other OpenAI models with structured output support
-- **Vision/Image Analysis**: Requires vision-capable models like GPT-4o, Claude 3, or Gemini Pro Vision
-- **PDF Processing**: May require specific plugins or engines depending on the model and document type
-
-For tasks requiring both vision and structured output (like receipt extraction), use models that support both capabilities, such as:
-- `openai/gpt-4o`
-- `openai/gpt-4o-mini`
-
-## See Also
-
-- [Data Extraction Agent](/docs/agents/data-extraction-agent) - Comprehensive examples of structured data extraction
-- [Generation Provider Overview](/docs/framework/generation-provider) - Understanding provider architecture
-- [OpenRouter API Documentation](https://openrouter.ai/docs) - Official OpenRouter documentation
diff --git a/docs/docs/generation-providers/openai-provider.md b/docs/docs/generation-providers/openai-provider.md
deleted file mode 100644
index b7a19d19..00000000
--- a/docs/docs/generation-providers/openai-provider.md
+++ /dev/null
@@ -1,473 +0,0 @@
-# OpenAI Provider
-
-The OpenAI provider enables integration with OpenAI's GPT models including GPT-4, GPT-4 Turbo, and GPT-3.5 Turbo. It supports advanced features like function calling, streaming responses, and structured outputs.
-
-## Configuration
-
-### Basic Setup
-
-Configure OpenAI in your agent:
-
-<<< @/../test/dummy/app/agents/open_ai_agent.rb#snippet{ruby:line-numbers}
-
-### Configuration File
-
-Set up OpenAI credentials in `config/active_agent.yml`:
-
-::: code-group
-
-<<< @/../test/dummy/config/active_agent.yml#openai_anchor{yaml:line-numbers}
-
-<<< @/../test/dummy/config/active_agent.yml#openai_dev_config{yaml:line-numbers}
-
-:::
-
-### Environment Variables
-
-Alternatively, use environment variables:
-
-```bash
-OPENAI_ACCESS_TOKEN=your-api-key
-OPENAI_ORGANIZATION_ID=your-org-id # Optional
-```
-
-## Supported Models
-
-### Chat Completions API Models
-- **GPT-4o** - Most capable model with vision capabilities
-- **GPT-4o-mini** - Smaller, faster version of GPT-4o
-- **GPT-4o-search-preview** - GPT-4o with built-in web search
-- **GPT-4o-mini-search-preview** - GPT-4o-mini with built-in web search
-- **GPT-4 Turbo** - Latest GPT-4 with 128k context
-- **GPT-4** - Original GPT-4 model
-- **GPT-3.5 Turbo** - Fast and cost-effective
-
-### Responses API Models
-- **GPT-5** - Advanced model with support for all built-in tools
-- **GPT-4.1** - Enhanced GPT-4 with tool support
-- **GPT-4.1-mini** - Efficient version with tool support
-- **o3** - Reasoning model with advanced capabilities
-- **o4-mini** - Compact reasoning model
-
-Note: Built-in tools like MCP and image generation require the Responses API and compatible models.
-
-## Features
-
-### Function Calling
-
-OpenAI supports native function calling with automatic tool execution:
-
-```ruby
-class DataAnalysisAgent < ApplicationAgent
- generate_with :openai, model: "gpt-4o"
-
- def analyze_data
- @data = params[:data]
- prompt # Will include all public methods as available tools
- end
-
- def calculate_average(numbers:)
- numbers.sum.to_f / numbers.size
- end
-
- def fetch_external_data(endpoint:)
- # Tool that OpenAI can call
- HTTParty.get(endpoint)
- end
-end
-```
-
-### Streaming Responses
-
-Enable real-time streaming for better user experience:
-
-```ruby
-class StreamingAgent < ApplicationAgent
- generate_with :openai, stream: true
-
- on_message_chunk do |chunk|
- # Handle streaming chunks
- broadcast_to_user(chunk)
- end
-
- def chat
- prompt(message: params[:message])
- end
-end
-```
-
-### Vision Capabilities
-
-GPT-4o models support image analysis:
-
-```ruby
-class VisionAgent < ApplicationAgent
- generate_with :openai, model: "gpt-4o"
-
- def analyze_image
- @image_url = params[:image_url]
- prompt content_type: :text
- end
-end
-
-# In your view (analyze_image.text.erb):
-# Analyze this image: <%= @image_url %>
-```
-
-### Structured Output
-
-OpenAI provides native structured output support, ensuring responses conform to specified JSON schemas. This feature is available with models such as GPT-4o, GPT-4o-mini, GPT-4-turbo, and GPT-3.5-turbo.
-
-#### Supported Models
-
-Models with full structured output support:
-- **GPT-4o** - Vision + structured output
-- **GPT-4o-mini** - Vision + structured output
-- **GPT-4-turbo** - Structured output only (no vision)
-- **GPT-3.5-turbo** - Structured output only
-
-#### Basic Usage
-
-Enable JSON mode with a schema:
-
-```ruby
-class StructuredAgent < ApplicationAgent
- generate_with :openai,
- model: "gpt-4o",
- response_format: { type: "json_object" }
-
- def extract_entities
- @text = params[:text]
- prompt(
- output_schema: :entity_extraction,
- instructions: "Extract entities and return as JSON"
- )
- end
-end
-```
-
-#### With Schema Generator
-
-Use ActiveAgent's schema generator for automatic schema creation:
-
-<<< @/../test/integration/structured_output_json_parsing_test.rb#34-70{ruby:line-numbers}
-
-#### Strict Mode
-
-OpenAI supports strict schema validation to guarantee output format:
-
-```ruby
-schema = {
- name: "user_data",
- strict: true,
- schema: {
- type: "object",
- properties: {
- name: { type: "string" },
- age: { type: "integer" },
- email: { type: "string", format: "email" }
- },
- required: ["name", "age", "email"],
- additionalProperties: false
- }
-}
-
-response = agent.prompt(
- message: "Extract user information",
- output_schema: schema
-).generate_now
-```
-
-#### Response Handling
-
-Structured output responses are automatically parsed:
-
-```ruby
-response = OpenAIAgent.with(
- message: "Extract data from: John Doe, 30, john@example.com"
-).extract_with_schema.generate_now
-
-# Automatic JSON parsing
-response.message.content_type # => "application/json"
-response.message.content # => {"name" => "John Doe", "age" => 30, "email" => "john@example.com"}
-response.message.raw_content # => '{"name":"John Doe","age":30,"email":"john@example.com"}'
-```
-
-#### Best Practices
-
-1. **Use strict mode** for production applications requiring guaranteed format
-2. **Leverage model schemas** from ActiveRecord/ActiveModel for consistency
-3. **Test with VCR** to ensure schemas work with actual API responses
-4. **Handle edge cases** like empty or invalid inputs gracefully
-
-#### Limitations
-
-- Maximum schema complexity varies by model
-- Very large schemas may impact token limits
-- Not all JSON Schema features are supported (check OpenAI docs for specifics)
-
-See the [Structured Output guide](/docs/active-agent/structured-output) for comprehensive documentation and examples.
-
-### Built-in Tools (Responses API)
-
-OpenAI's Responses API provides powerful built-in tools for web search, image generation, and MCP integration:
-
-#### Web Search
-
-Enable web search capabilities using the `web_search_preview` tool:
-
-<<< @/../test/dummy/app/agents/web_search_agent.rb#17-36{ruby:line-numbers}
-
-For Chat Completions API with specific models, use `web_search_options`:
-
-<<< @/../test/dummy/app/agents/web_search_agent.rb#52-72{ruby:line-numbers}
-
-#### Image Generation
-
-Generate and edit images using the `image_generation` tool:
-
-<<< @/../test/dummy/app/agents/multimodal_agent.rb#6-26{ruby:line-numbers}
-
-#### MCP (Model Context Protocol) Integration
-
-Connect to external services and MCP servers:
-
-<<< @/../test/dummy/app/agents/mcp_integration_agent.rb#6-29{ruby:line-numbers}
-
-Connect to custom MCP servers:
-
-<<< @/../test/dummy/app/agents/mcp_integration_agent.rb#31-50{ruby:line-numbers}
-
-Available MCP Connectors:
-- **Dropbox** - `connector_dropbox`
-- **Gmail** - `connector_gmail`
-- **Google Calendar** - `connector_googlecalendar`
-- **Google Drive** - `connector_googledrive`
-- **Microsoft Teams** - `connector_microsoftteams`
-- **Outlook Calendar** - `connector_outlookcalendar`
-- **Outlook Email** - `connector_outlookemail`
-- **SharePoint** - `connector_sharepoint`
-- **GitHub** - Use server URL: `https://api.githubcopilot.com/mcp/`
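-
-As a sketch, a server-based MCP tool such as the GitHub server above can be passed through the `tools` option (see Provider-Specific Parameters below). The tool hash follows OpenAI's Responses API format; adjust `require_approval` and the model to your needs:
-
-```ruby
-class GithubMcpAgent < ApplicationAgent
-  generate_with :openai,
-    model: "gpt-4.1",
-    use_responses_api: true,
-    tools: [
-      {
-        type: "mcp",
-        server_label: "github",
-        server_url: "https://api.githubcopilot.com/mcp/",
-        require_approval: "never"
-      }
-    ]
-end
-```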
-
-#### Combining Multiple Tools
-
-Use multiple built-in tools together:
-
-<<< @/../test/dummy/app/agents/multimodal_agent.rb#28-49{ruby:line-numbers}
-
-### Using Concerns for Shared Tools
-
-Create reusable tool configurations with concerns:
-
-<<< @/../test/dummy/app/agents/concerns/research_tools.rb#1-61{ruby:line-numbers}
-
-Use the concern in your agents:
-
-<<< @/../test/dummy/app/agents/research_agent.rb#1-14{ruby:line-numbers}
-
-### Tool Configuration Example
-
-Here's how built-in tools are configured in the prompt options:
-
-<<< @/../test/agents/builtin_tools_doc_test.rb#tool_configuration_example{ruby:line-numbers}
-
-::: details Configuration Output
-
-:::
-
-### Embeddings
-
-Generate high-quality text embeddings using OpenAI's embedding models. See the [Embeddings Framework Documentation](/docs/framework/embeddings) for comprehensive coverage.
-
-#### Basic Embedding Generation
-
-<<< @/../test/agents/embedding_agent_test.rb#embedding_openai_model_config{ruby:line-numbers}
-
-::: details Response Example
-
-:::
-
-#### Available Embedding Models
-
-- **text-embedding-3-large** - Highest quality (3072 dimensions, configurable down to 256)
-- **text-embedding-3-small** - Balanced performance (1536 dimensions, configurable)
-- **text-embedding-ada-002** - Legacy model (1536 dimensions, fixed)
-
-For detailed model comparisons and benchmarks, see [OpenAI's Embeddings Documentation](https://platform.openai.com/docs/guides/embeddings).
-
-#### Similarity Search Example
-
-<<< @/../test/agents/embedding_agent_test.rb#embedding_similarity_search{ruby:line-numbers}
-
-::: details Response Example
-
-:::
-
-For more advanced embedding patterns, see the [Embeddings Documentation](/docs/framework/embeddings).
-
-#### Dimension Configuration
-
-OpenAI's text-embedding-3 models support configurable dimensions:
-
-<<< @/../test/agents/embedding_agent_test.rb#embedding_dimension_test{ruby:line-numbers}
-
-::: details Response Example
-
-:::
-
-::: tip Dimension Reduction
-OpenAI's text-embedding-3-large and text-embedding-3-small models support native dimension reduction by specifying a `dimensions` parameter. This can significantly reduce storage costs while maintaining good performance.
-:::
-
-#### Batch Processing
-
-Efficiently process multiple embeddings:
-
-<<< @/../test/agents/embedding_agent_test.rb#embedding_batch_processing{ruby:line-numbers}
-
-::: details Response Example
-
-:::
-
-#### Cost Optimization for Embeddings
-
-Choose the right model based on your needs:
-
-| Model | Dimensions | Cost per 1M tokens | Best for |
-|-------|------------|-------------------|----------|
-| text-embedding-3-large | 3072 (configurable) | $0.13 | Highest quality, semantic search |
-| text-embedding-3-small | 1536 (configurable) | $0.02 | Good balance, most applications |
-| text-embedding-ada-002 | 1536 | $0.10 | Legacy support |
-
-::: tip Cost Savings
-- Use text-embedding-3-small for most applications (85% cheaper than large)
-- Cache embeddings aggressively - they don't change for the same input
-- Consider dimension reduction for large-scale applications
-:::
-
-## Provider-Specific Parameters
-
-### Model Parameters
-
-- **`model`** - Model identifier (e.g., "gpt-4o", "gpt-3.5-turbo")
-- **`embedding_model`** - Embedding model (e.g., "text-embedding-3-large")
-- **`dimensions`** - Reduced dimensions for embeddings (for 3-large and 3-small models)
-- **`temperature`** - Controls randomness (0.0 to 2.0)
-- **`max_tokens`** - Maximum tokens in response
-- **`top_p`** - Nucleus sampling parameter
-- **`frequency_penalty`** - Penalize frequent tokens (-2.0 to 2.0)
-- **`presence_penalty`** - Penalize new topics (-2.0 to 2.0)
-- **`seed`** - For deterministic outputs
-- **`response_format`** - Output format ({ type: "json_object" } or { type: "text" })
-
-### Organization Settings
-
-- **`organization_id`** - OpenAI organization ID
-- **`project_id`** - OpenAI project ID for usage tracking
-
-### Advanced Options
-
-- **`stream`** - Enable streaming responses (true/false)
-- **`tools`** - Array of built-in tools for Responses API (web_search_preview, image_generation, mcp)
-- **`tool_choice`** - Control tool usage ("auto", "required", "none", or specific tool)
-- **`parallel_tool_calls`** - Allow parallel tool execution (true/false)
-- **`use_responses_api`** - Force use of Responses API (true/false)
-- **`web_search`** - Web search configuration for Chat API with search-preview models
-- **`web_search_options`** - Alternative parameter name for web search in Chat API
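-
-A sketch combining several of the parameters above (illustrative values; adjust for your workload):
-
-```ruby
-class TunedOpenAIAgent < ApplicationAgent
-  generate_with :openai,
-    model: "gpt-4o-mini",
-    temperature: 0.3,          # focused, mostly deterministic output
-    max_tokens: 800,           # cap response length
-    frequency_penalty: 0.2,    # discourage repetitive phrasing
-    seed: 42,                  # best-effort reproducibility
-    parallel_tool_calls: true  # allow tools to run in parallel
-end
-```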
-
-## Azure OpenAI
-
-For Azure OpenAI Service, configure a custom host:
-
-```ruby
-class AzureAgent < ApplicationAgent
- generate_with :openai,
- access_token: Rails.application.credentials.dig(:azure, :api_key),
- host: "https://your-resource.openai.azure.com",
- api_version: "2024-02-01",
- model: "your-deployment-name"
-end
-```
-
-## Error Handling
-
-Handle OpenAI-specific errors:
-
-```ruby
-class RobustAgent < ApplicationAgent
- generate_with :openai,
- max_retries: 3,
- request_timeout: 30
-
- rescue_from OpenAI::RateLimitError do |error|
- Rails.logger.error "Rate limit hit: #{error.message}"
- retry_with_backoff
- end
-
- rescue_from OpenAI::APIError do |error|
- Rails.logger.error "OpenAI API error: #{error.message}"
- fallback_response
- end
-end
-```
-
-## Testing
-
-Use VCR for consistent tests:
-
-<<< @/../test/agents/open_ai_agent_test.rb#4-15{ruby:line-numbers}
-
-## Cost Optimization
-
-### Use Appropriate Models
-
-- Use GPT-3.5 Turbo for simple tasks
-- Reserve GPT-4o for complex reasoning
-- Consider GPT-4o-mini for a balance
-
-### Optimize Token Usage
-
-```ruby
-class EfficientAgent < ApplicationAgent
- generate_with :openai,
- model: "gpt-3.5-turbo",
- max_tokens: 500, # Limit response length
- temperature: 0.3 # More focused responses
-
- def summarize
- @content = params[:content]
- # Truncate input if needed
- @content = @content.truncate(3000) if @content.length > 3000
- prompt
- end
-end
-```
-
-### Cache Responses
-
-```ruby
-class CachedAgent < ApplicationAgent
- generate_with :openai
-
- def answer_faq
- question = params[:question]
-
- Rails.cache.fetch("faq/#{question.parameterize}", expires_in: 1.day) do
- prompt(message: question).generate_now
- end
- end
-end
-```
-
-## Best Practices
-
-1. **Set appropriate temperature** - Lower for factual tasks, higher for creative
-2. **Use system messages effectively** - Provide clear instructions
-3. **Implement retry logic** - Handle transient failures
-4. **Monitor usage** - Track token consumption and costs
-5. **Use the latest models** - They're often more capable and cost-effective
-6. **Validate outputs** - Especially for critical applications
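-
-For example, the first two practices might look like this in an agent (a sketch; the instruction wording is illustrative):
-
-```ruby
-class FactCheckAgent < ApplicationAgent
-  # Lower temperature for factual work
-  generate_with :openai, model: "gpt-4o-mini", temperature: 0.1
-
-  def verify
-    @claim = params[:claim]
-    # Clear system instructions rendered into the prompt context
-    prompt instructions: "You are a careful fact checker. Answer concisely and say 'unknown' when unsure."
-  end
-end
-```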
-
-## Related Documentation
-
-- [Generation Provider Overview](/docs/framework/generation-provider)
-- [Configuration Guide](/docs/getting-started#configuration)
-- [OpenAI API Documentation](https://platform.openai.com/docs)
\ No newline at end of file
diff --git a/docs/docs/getting-started.md b/docs/docs/getting-started.md
deleted file mode 100644
index 841bb44f..00000000
--- a/docs/docs/getting-started.md
+++ /dev/null
@@ -1,110 +0,0 @@
----
-title: Getting Started
----
-# {{ $frontmatter.title }}
-
-This guide will help you set up and create your first ActiveAgent application.
-
-## Installation
-
-Use bundler to add activeagent to your Gemfile and install:
-
-```bash
-bundle add activeagent
-```
-
-Add the generation provider gem you want to use:
-
-::: code-group
-
-```bash [OpenAI]
-bundle add ruby-openai
-```
-
-```bash [Anthropic]
-bundle add ruby-anthropic
-```
-
-```bash [Ollama]
-# Ollama follows the same API spec as OpenAI, so you can use the same gem.
-bundle add ruby-openai
-```
-
-```bash [OpenRouter]
-bundle add ruby-openai
-# OpenRouter follows the same API spec as OpenAI, so you can use the same gem.
-```
-
-:::
-
-Then install the gems by running:
-
-```bash
-bundle install
-```
-### Active Agent install generator
-To set up Active Agent in your Rails application, you can use the install generator. This will create the necessary configuration files and directories for Active Agent.
-
-```bash
-rails generate active_agent:install
-```
-This command will create the following files and directories:
-- `config/active_agent.yml`: The configuration file for Active Agent, where you can specify your generation providers and their settings.
-- `app/agents`: The directory where your agent classes will be stored.
-- `app/views/layouts/agent.text.erb`: The layout file for your agent prompt/view templates.
-- `app/views/agent_*`: The directory where your agent prompt/view templates will be stored.
-
-## Usage
-Active Agent is designed to work seamlessly with Rails applications. It can be easily integrated into your existing Rails app without any additional configuration. The framework automatically detects the Rails environment and configures itself accordingly.
-
-You can start by defining an `ApplicationAgent` class that inherits from `ActiveAgent::Base`. This class will define the actions and behaviors of your application's base agent. You can then use the `generate_with` method to specify the generation provider for your agent.
-
-<<< @/../test/dummy/app/agents/application_agent.rb {ruby}
-
-This sets up the `ApplicationAgent` to use OpenAI as the generation provider. You can replace `:openai` with any other supported provider, such as `:anthropic`, `:google`, or `:ollama`. [Learn more about generation providers and their configuration →](/docs/framework/generation-provider)
-
-Now, you can interact with your application agent using the default `prompt_context` method. This method allows you to provide a context for the agent to generate a response based on the defined actions and behaviors:
-
-<<< @/../test/agents/application_agent_test.rb#application_agent_prompt_context_message_generation{ruby:line-numbers}
-
-::: details Response Example
-
-:::
-
-This code parameterizes the `ApplicationAgent` by passing a set of `params` using `with`.
-
-## Configuration
-### Generation Provider Configuration
-Active Agent supports multiple generation providers, including OpenAI, Anthropic, and Ollama. You can configure these providers in your Rails application using the `config/active_agent.yml` file. This file allows you to specify the API keys, models, and other settings for each provider. This is similar to Active Storage service configurations.
-
-<<< @/../test/dummy/config/active_agent.yml{yaml:line-numbers}
-
-### Configuring custom hosts
-You can also set the host and port for the generation provider if needed. For example, if you are using a local instance of Ollama or a cloud provider's hosted instance of OpenAI, you can set the host in your configuration file as shown in the example above.
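-
-For example, a development entry pointing at a local Ollama instance might look like this (a sketch; substitute your own host and model values):
-
-```yaml
-development:
-  ollama:
-    host: http://localhost:11434
-    model: llama3
-```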
-
-## Your First Agent
-You can generate your first agent using the Rails generator. This will create a new agent class in the `app/agents` directory. It will also create a corresponding view template for the agent's actions as well as an Application Agent if you don't already have one.
-
-```bash
-$ rails generate active_agent:agent TravelAgent search book confirm
-```
-The `ApplicationAgent` is the base class for all agents in your application, similar to how ApplicationController is the base class for all controllers.
-
-The generator will create:
-- An agent class with the specified actions (`search`, `book`, and `confirm`)
-- View templates for each action in `app/views/travel_agent/`
-- An `ApplicationAgent` base class if one doesn't exist
-
-<<< @/../test/dummy/app/agents/travel_agent.rb {ruby}
-
-Agent action methods build prompt context objects whose messages contain content rendered from Action Views.
-
-## Action Prompts
-
-Each action is defined as a public instance method that can call `prompt` to build context objects that are used to generate responses. [Learn more about Action Prompts and how they work →](/docs/action-prompt/prompts)
-
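-A minimal sketch of such an action (assuming the `TravelAgent` generated above and a `search.text.erb` view):
-
-```ruby
-class TravelAgent < ApplicationAgent
-  def search
-    # Instance variables are available to the action's view templates
-    @destination = params[:destination]
-    prompt # renders app/views/travel_agent/search.* into the prompt context
-  end
-end
-```
-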
-### Instruction messages
-
-### Prompt messages
-The views define:
-- **JSON views**: [Tool schemas for function calling](/docs/action-prompt/tools) or [output schemas for structured responses](/docs/active-agent/structured-output)
-- **HTML views**: Web-friendly formatted responses
-- **Text views**: Plain text responses
diff --git a/docs/examples/browser-use-agent.md b/docs/examples/browser-use-agent.md
new file mode 100644
index 00000000..169d6a9c
--- /dev/null
+++ b/docs/examples/browser-use-agent.md
@@ -0,0 +1,589 @@
+---
+title: Browser Use Agent
+---
+# {{ $frontmatter.title }}
+
+Active Agent provides browser automation capabilities through the Browser Use Agent (similar to Anthropic's Computer Use), which can navigate web pages, interact with elements, extract content, and take screenshots using Cuprite/Chrome.
+
+## Overview
+
+The Browser Use Agent demonstrates how ActiveAgent can integrate with external tools like headless browsers to create powerful automation workflows. Following the naming convention of tools like Anthropic's Computer Use, it provides AI-driven browser control using familiar Rails patterns.
+
+## Features
+
+- **Navigate to URLs** - Direct browser navigation to any website
+- **Click elements** - Click buttons, links, or any element using CSS selectors or text
+- **Extract content** - Extract text from specific elements or entire pages
+- **Take screenshots** - Capture full page or specific areas with HD resolution (1920x1080)
+- **Fill forms** - Interact with form fields programmatically
+- **Extract links** - Gather links from pages with optional preview screenshots
+- **Smart content detection** - Automatically detect and focus on main content areas
+
+## Setup
+
+Generate a browser use agent:
+
+```bash
+rails generate active_agent:agent browser_use navigate click extract_text screenshot
+```
+
+## Agent Implementation
+
+::: code-group
+
+```ruby [browser_agent.rb]
+require "capybara"
+require "capybara/cuprite"
+
+class BrowserAgent < ApplicationAgent
+ # Configure AI provider for intelligent automation
+ generate_with :openai, model: "gpt-4o-mini"
+
+ class_attribute :browser_session, default: nil
+
+ # Navigate to a URL
+ def navigate
+ setup_browser_if_needed
+ @url = params[:url]
+
+ begin
+ self.class.browser_session.visit(@url)
+ @status = 200
+ @current_url = self.class.browser_session.current_url
+ @title = self.class.browser_session.title
+ rescue => e
+ @status = 500
+ @error = e.message
+ end
+
+ prompt
+ end
+
+ # Click on an element
+ def click
+ setup_browser_if_needed
+ @selector = params[:selector]
+ @text = params[:text]
+
+ begin
+ if @text
+ self.class.browser_session.click_on(@text)
+ elsif @selector
+ self.class.browser_session.find(@selector).click
+ end
+ @success = true
+ @current_url = self.class.browser_session.current_url
+ rescue => e
+ @success = false
+ @error = e.message
+ end
+
+ prompt
+ end
+
+ # Extract text from the page
+ def extract_text
+ setup_browser_if_needed
+ @selector = params[:selector] || "body"
+
+ begin
+ element = self.class.browser_session.find(@selector)
+ @text = element.text
+ @success = true
+ rescue => e
+ @success = false
+ @error = e.message
+ end
+
+ prompt
+ end
+
+ # Take a screenshot of the current page
+ def screenshot
+ setup_browser_if_needed
+ @filename = params[:filename] || "screenshot_#{Time.now.to_i}.png"
+ @main_content_only = params[:main_content_only] != false # Default to true
+
+ screenshot_dir = Rails.root.join("tmp", "screenshots")
+ FileUtils.mkdir_p(screenshot_dir)
+ @path = screenshot_dir.join(@filename)
+
+ begin
+ options = { path: @path }
+
+ # Auto-detect and crop to main content if enabled
+ if @main_content_only && !params[:selector] && !params[:area]
+ main_area = detect_main_content_area
+ options[:area] = main_area if main_area
+ end
+
+ self.class.browser_session.save_screenshot(**options)
+ @success = true
+ @filepath = @path.to_s
+ rescue => e
+ @success = false
+ @error = e.message
+ end
+
+ prompt
+ end
+
+ # Extract main content from the page
+ def extract_main_content
+ setup_browser_if_needed
+
+ begin
+ content_selectors = [
+ "#mw-content-text", # Wikipedia
+ "main", "article", "[role='main']",
+ ".content", "#content"
+ ]
+
+ @content = nil
+ content_selectors.each do |selector|
+ if self.class.browser_session.has_css?(selector)
+ @content = self.class.browser_session.find(selector).text
+ @selector_used = selector
+ break
+ end
+ end
+
+ @content ||= self.class.browser_session.find("body").text
+ @success = true
+ rescue => e
+ @success = false
+ @error = e.message
+ end
+
+ prompt
+ end
+
+ private
+
+ def setup_browser_if_needed
+ return if self.class.browser_session
+
+ unless Capybara.drivers[:cuprite_agent]
+ Capybara.register_driver :cuprite_agent do |app|
+ Capybara::Cuprite::Driver.new(
+ app,
+ window_size: [1920, 1080],
+ browser_options: {
+ "no-sandbox": nil,
+ "disable-gpu": nil,
+ "disable-dev-shm-usage": nil
+ },
+ inspector: false,
+ headless: true
+ )
+ end
+ end
+
+ self.class.browser_session = Capybara::Session.new(:cuprite_agent)
+ end
+
+ def detect_main_content_area
+ main_selectors = [
+ "main", "[role='main']", "#main-content",
+ "#content", "article", "#mw-content-text"
+ ]
+
+ main_selectors.each do |selector|
+ if self.class.browser_session.has_css?(selector, wait: 0)
+ begin
+ rect = self.class.browser_session.evaluate_script(<<-JS)
+ (function() {
+ var elem = document.querySelector('#{selector}');
+ if (!elem) return null;
+ var rect = elem.getBoundingClientRect();
+ return {
+ x: Math.round(rect.left + window.scrollX),
+ y: Math.round(rect.top + window.scrollY),
+ width: Math.round(rect.width),
+ height: Math.round(rect.height)
+ };
+ })()
+ JS
+
+ if rect && rect["width"] > 0 && rect["height"] > 0
+ start_y = (rect["y"] < 100) ? 150 : rect["y"]
+ return { x: 0, y: start_y, width: 1920, height: 1080 - start_y }
+ end
+ rescue => e
+ # Continue to next selector
+ end
+ end
+ end
+
+ # Default: skip header area
+ { x: 0, y: 150, width: 1920, height: 930 }
+ end
+end
+```
+
+```erb [instructions.text.erb]
+You are a browser automation agent that can navigate web pages and interact with web elements using Cuprite/Chrome.
+
+You have access to the following browser actions:
+<% controller.action_schemas.each do |schema| %>
+- <%= schema["name"] %>: <%= schema["description"] %>
+<% end %>
+
+<% if params[:url].present? %>
+Starting URL: <%= params[:url] %>
+You should navigate to this URL first to begin your research.
+<% end %>
+
+Use these tools to help users automate web browsing tasks, extract information from websites, and perform user interactions.
+
+When researching a topic:
+1. Navigate to the provided URL or search for relevant pages
+2. Extract the main content to understand the topic
+3. Use the click action with specific text to navigate to related pages
+4. Use go_back to return to previous pages when needed
+5. Provide a comprehensive summary with reference URLs
+
+Screenshot tips (browser is 1920x1080 HD resolution):
+- Default screenshots automatically try to crop to main content
+- For Wikipedia: { "x": 0, "y": 200, "width": 1920, "height": 880 }
+- For specific elements, use the selector parameter
+```
+
+```ruby [screenshot.json.jbuilder]
+json.name action_name
+json.description "Take a screenshot of the current page"
+json.parameters do
+ json.type "object"
+ json.properties do
+ json.filename do
+ json.type "string"
+ json.description "Name for the screenshot file"
+ end
+ json.full_page do
+ json.type "boolean"
+ json.description "Whether to capture the full page"
+ end
+ json.main_content_only do
+ json.type "boolean"
+ json.description "Auto-detect and crop to main content (default: true)"
+ end
+ json.selector do
+ json.type "string"
+ json.description "CSS selector for specific element"
+ end
+ json.area do
+ json.type "object"
+ json.description "Specific area to capture"
+ json.properties do
+ json.x { json.type "integer" }
+ json.y { json.type "integer" }
+ json.width { json.type "integer" }
+ json.height { json.type "integer" }
+ end
+ end
+ end
+end
+```
+
+:::
+
+## Basic Navigation Example
+
+The browser use agent can navigate to URLs and interact with pages using AI:
+
+```ruby
+response = BrowserAgent.prompt(
+ message: "Navigate to https://www.example.com and tell me what you see"
+).generate_now
+
+assert response.message.content.present?
+```
+
+::: details Navigation Response Example
+
+:::
+
+## AI-Driven Browser Control
+
+The browser use agent can use AI to determine which actions to take:
+
+```ruby
+response = BrowserAgent.prompt(
+ message: "Go to https://www.example.com and extract the main heading"
+).generate_now
+
+# Check that AI used the tools
+assert response.prompt.messages.any? { |m| m.role == :tool }
+assert response.message.content.present?
+```
+
+::: details AI Browser Response Example
+
+:::
+
+## Direct Action Usage
+
+You can also call browser actions directly without AI:
+
+```ruby
+# Call navigate action directly (synchronous execution)
+navigate_response = BrowserAgent.with(
+ url: "https://www.example.com"
+).navigate
+
+# The action returns a Generation object
+assert_kind_of ActiveAgent::Generation, navigate_response
+
+# Execute the generation
+result = navigate_response.generate_now
+
+assert result.message.content.include?("navigated")
+```
+
+::: details Direct Action Response Example
+
+:::
+
+## Wikipedia Research Example
+
+The browser use agent excels at research tasks, navigating between pages and gathering information:
+
+```ruby
+response = BrowserAgent.prompt(
+ message: "Research the Apollo 11 moon landing mission. Start at the main Wikipedia article, then:
+ 1) Extract the main content to get an overview
+ 2) Find and follow links to learn about the crew members
+ 3) Take screenshots of important pages
+ 4) Extract key dates and mission objectives
+ Please provide a comprehensive summary.",
+ url: "https://en.wikipedia.org/wiki/Apollo_11"
+).generate_now
+
+# The agent should navigate to Wikipedia and gather information
+assert response.message.content.present?
+assert response.message.content.downcase.include?("apollo") ||
+ response.message.content.downcase.include?("moon")
+
+# Check that multiple tools were used
+tool_messages = response.prompt.messages.select { |m| m.role == :tool }
+assert tool_messages.any?, "Should have used tools"
+```
+
+::: details Wikipedia Research Response Example
+
+:::
+
+## Area Screenshot Example
+
+Take screenshots of specific page regions:
+
+```ruby
+response = BrowserAgent.prompt(
+ message: "Navigate to https://www.example.com and take a screenshot of just the header area (top 200 pixels)"
+).generate_now
+
+assert response.message.content.present?
+
+# Check that screenshot tool was used
+tool_messages = response.prompt.messages.select { |m| m.role == :tool }
+assert tool_messages.any? { |m| m.content.include?("screenshot") }
+```
+
+::: details Area Screenshot Response Example
+
+:::
+
+## Main Content Auto-Cropping
+
+The browser use agent can automatically detect and crop to main content areas:
+
+```ruby
+response = BrowserAgent.prompt(
+ message: "Navigate to Wikipedia's Apollo 11 page and take a screenshot of the main content (should automatically exclude navigation/header)"
+).generate_now
+
+assert response.message.content.present?
+
+# Check that screenshot was taken
+tool_messages = response.prompt.messages.select { |m| m.role == :tool }
+assert tool_messages.any? { |m| m.content.include?("screenshot") }
+```
+
+::: details Main Content Crop Response Example
+
+:::
+
+## Screenshot Capabilities
+
+The screenshot action provides multiple options for capturing page content:
+
+### Full Page Screenshot
+```ruby
+BrowserAgent.with(
+ url: "https://example.com"
+).navigate.generate_now
+
+BrowserAgent.with(
+  filename: "full_page.png",
+  full_page: true
+).screenshot.generate_now
+```
+
+### Area Screenshot
+```ruby
+BrowserAgent.with(
+  filename: "header.png",
+  area: { x: 0, y: 0, width: 1920, height: 200 }
+).screenshot.generate_now
+```
+
+### Element Screenshot
+```ruby
+BrowserAgent.with(
+  filename: "content.png",
+  selector: "#main-content"
+).screenshot.generate_now
+```
+
+### Auto-Crop to Main Content
+```ruby
+BrowserAgent.with(
+  filename: "main.png",
+  main_content_only: true # Default behavior
+).screenshot.generate_now
+```
+
+## Browser Configuration
+
+The browser runs in HD resolution (1920x1080) with headless Chrome:
+
+```ruby
+def setup_browser_if_needed
+ Capybara.register_driver :cuprite_agent do |app|
+ Capybara::Cuprite::Driver.new(
+ app,
+ window_size: [1920, 1080], # HD resolution
+ browser_options: {
+ "no-sandbox": nil,
+ "disable-gpu": nil,
+ "disable-dev-shm-usage": nil
+ },
+ inspector: false,
+ headless: true
+ )
+ end
+end
+```
+
+## Smart Content Detection
+
+The browser use agent includes intelligent content detection that:
+- Identifies main content areas using common selectors
+- Skips headers and navigation automatically
+- Adjusts cropping based on page structure
+- Falls back to sensible defaults
+
+Common selectors checked:
+- `main`, `[role='main']`
+- `#main-content`, `#content`
+- `article`
+- `#mw-content-text` (Wikipedia)
+- `.container` (Bootstrap)
+
+## Tips for Effective Use
+
+### Navigation Best Practices
+- Use `click` with text parameter for specific links
+- Extract main content before navigating away
+- Use `go_back` to return to previous pages
+- Take screenshots of important pages
+
+### Wikipedia Research
+- Use selector `#mw-content-text` for article content
+- Click directly on relevant links rather than extracting all links
+- Take screenshots with `main_content_only: true` to exclude navigation
+
+### Screenshot Optimization
+- Default `main_content_only: true` crops out headers automatically
+- Use area parameter for specific regions: `{ x: 0, y: 150, width: 1920, height: 930 }`
+- For Wikipedia, consider `y: 200` to skip navigation bars
+- Full page screenshots available with `full_page: true`
+
+## Integration with Rails
+
+The Browser Use Agent integrates seamlessly with Rails applications:
+
+```ruby
+class WebScraperController < ApplicationController
+ def scrape
+ response = BrowserAgent.prompt(
+ message: params[:instructions],
+ url: params[:url]
+ ).generate_now
+
+ render json: {
+ content: response.message.content,
+ screenshots: response.prompt.messages
+ .select { |m| m.role == :tool && m.content.include?("screenshot") }
+        .map { |m| m.content.match(/File: (.+?)\n/)&.captures&.first }
+ }
+ end
+end
+```
+
+## Advanced Usage
+
+### Multi-Page Navigation Flow
+```ruby
+# These calls share the agent's class-level browser session. `go_back` and
+# `extract_links` are additional generated actions not shown in the
+# abbreviated implementation above.
+
+# Navigate to the main page
+BrowserAgent.with(url: "https://example.com").navigate.generate_now
+
+# Extract the main content
+content = BrowserAgent.extract_main_content.generate_now
+
+# Click a specific link by its visible text
+BrowserAgent.with(text: "Learn More").click.generate_now
+
+# Take a screenshot of the new page
+BrowserAgent.with(main_content_only: true).screenshot.generate_now
+
+# Go back to the previous page
+BrowserAgent.go_back.generate_now
+
+# Extract links for further exploration
+links = BrowserAgent.with(selector: "#main-content").extract_links.generate_now
+```
+
+### Form Interaction
+```ruby
+# `fill_form` is an additional generated action not shown in the abbreviated implementation above
+BrowserAgent.with(url: "https://example.com/form").navigate.generate_now
+BrowserAgent.with(field: "email", value: "test@example.com").fill_form.generate_now
+BrowserAgent.with(field: "message", value: "Hello world").fill_form.generate_now
+BrowserAgent.with(text: "Submit").click.generate_now
+BrowserAgent.with(filename: "form_result.png").screenshot.generate_now
+```
+
+## Requirements
+
+- **Cuprite** gem for Chrome automation
+- **Chrome** or **Chromium** browser installed
+- **Capybara** for browser session management
+
+Add to your Gemfile:
+```ruby
+gem 'cuprite'
+gem 'capybara'
+```
+
+## Conclusion
+
+The Browser Use Agent demonstrates ActiveAgent's flexibility in integrating with external tools while maintaining Rails conventions. Following the pattern of tools like Anthropic's Computer Use, it provides powerful browser automation capabilities driven by AI, making it ideal for:
+
+- Web scraping and data extraction
+- Automated testing and verification
+- Research and information gathering
+- Screenshot generation for documentation
+- Form submission and interaction
diff --git a/docs/examples/data-extraction-agent.md b/docs/examples/data-extraction-agent.md
new file mode 100644
index 00000000..80c238c3
--- /dev/null
+++ b/docs/examples/data-extraction-agent.md
@@ -0,0 +1,419 @@
+---
+title: Data Extraction Agent
+---
+# {{ $frontmatter.title }}
+
+Active Agent provides data extraction capabilities to parse structured data from unstructured text, images, or PDFs.
+
+## Setup
+
+Generate a data extraction agent:
+
+```bash
+rails generate active_agent:agent data_extraction parse_content
+```
+
+## Agent Implementation
+
+::: code-group
+
+```ruby [data_extraction_agent.rb]
+class DataExtractionAgent < ApplicationAgent
+ before_action :set_multimodal_content, only: [:parse_content]
+
+ def parse_content
+ prompt_args = {
+ message: params[:message] || "Parse the content of the file or image",
+ image_data: @image_data,
+ file_data: @file_data
+ }
+
+ if params[:response_format]
+ prompt_args[:response_format] = params[:response_format]
+ elsif params[:output_schema]
+ # Support legacy output_schema parameter
+ prompt_args[:response_format] = {
+ type: "json_schema",
+ json_schema: params[:output_schema]
+ }
+ end
+
+ prompt(**prompt_args)
+ end
+
+ def describe_cat_image
+ prompt(
+ message: "Describe the cat in the image",
+ image_data: CatImageService.fetch_base64_image
+ )
+ end
+
+ private
+
+ def set_multimodal_content
+ if params[:file_path].present?
+      # strict_encode64 avoids embedded newlines that would break the data URI
+      @file_data ||= "data:application/pdf;base64,#{Base64.strict_encode64(File.read(params[:file_path]))}"
+    elsif params[:image_path].present?
+      @image_data ||= "data:image/jpeg;base64,#{Base64.strict_encode64(File.read(params[:image_path]))}"
+ end
+ end
+end
+```
+
+```json [chart_schema.json.erb]
+{
+ "format": {
+ "type": "json_schema",
+ "name": "chart_schema",
+ "schema": {
+ "type": "object",
+ "properties": {
+ "title": {
+ "type": "string",
+ "description": "The title of the chart."
+ },
+ "data_points": {
+ "type": "array",
+ "items": {
+ "$ref": "#/$defs/data_point"
+ }
+ }
+ },
+ "required": ["title", "data_points"],
+ "additionalProperties": false,
+ "$defs": {
+ "data_point": {
+ "type": "object",
+ "properties": {
+ "label": {
+ "type": "string",
+ "description": "The label for the data point."
+ },
+ "value": {
+ "type": "number",
+ "description": "The value of the data point."
+ }
+ },
+ "required": ["label", "value"],
+ "additionalProperties": false
+ }
+ }
+ }
+ }
+}
+```
+
+```json [resume_schema.json.erb]
+{
+ "format": {
+ "type": "json_schema",
+ "name": "resume_schema",
+ "schema": {
+ "type": "object",
+ "properties": {
+ "name": {
+ "type": "string",
+ "description": "The full name of the individual."
+ },
+ "email": {
+ "type": "string",
+ "format": "email",
+ "description": "The email address of the individual."
+ },
+ "phone": {
+ "type": "string",
+ "description": "The phone number of the individual."
+ },
+ "education": {
+ "type": "array",
+ "items": {
+ "$ref": "#/$defs/education"
+ }
+ },
+ "experience": {
+ "type": "array",
+ "items": {
+ "$ref": "#/$defs/experience"
+ }
+ }
+ },
+ "required": ["name", "email", "phone", "education", "experience"],
+ "additionalProperties": false,
+ "$defs": {
+ "education": {
+ "type": "object",
+ "properties": {
+ "degree": {
+ "type": "string",
+ "description": "The degree obtained."
+ },
+ "institution": {
+ "type": "string",
+ "description": "The institution where the degree was obtained."
+ },
+ "year": {
+ "type": "integer",
+ "description": "The year of graduation."
+ }
+ },
+ "required": ["degree", "institution", "year"],
+ "additionalProperties": false
+ },
+ "experience": {
+ "type": "object",
+ "properties": {
+ "job_title": {
+ "type": "string",
+ "description": "The job title held."
+ },
+ "company": {
+ "type": "string",
+ "description": "The company where the individual worked."
+ },
+ "duration": {
+ "type": "string",
+ "description": "The duration of employment."
+ }
+ },
+ "required": ["job_title", "company", "duration"],
+ "additionalProperties": false
+ }
+ }
+ },
+ "strict": true
+ }
+}
+```
+
+:::
+
+## Basic Image Example
+
+### Image Description
+
+Active Agent can extract descriptions from images without structured output:
+
+```ruby
+prompt = DataExtractionAgent.describe_cat_image
+response = prompt.generate_now
+
+# The response contains a natural language description
+puts response.message.content
+# => "The cat in the image appears to have a primarily dark gray coat..."
+```
+
+::: details Basic Cat Image Response Example
+
+:::
+
+### Image: Parse Chart Data
+
+Active Agent can extract data from chart images:
+
+```ruby
+sales_chart_path = Rails.root.join("test", "fixtures", "images", "sales_chart.png")
+
+prompt = DataExtractionAgent.with(
+ image_path: sales_chart_path
+).parse_content
+
+response = prompt.generate_now
+
+# The response contains chart analysis
+puts response.message.content
+# => "The image is a bar chart titled 'Quarterly Sales Report'..."
+```
+
+::: details Basic Chart Image Response Example
+
+:::
+
+## Structured Output
+Active Agent supports structured output using JSON schemas. Define schemas in your agent's views directory (e.g., `app/views/agents/data_extraction/`) and reference them using `response_format: { type: "json_schema", json_schema: :schema_name }`. [Learn more about structured output →](/actions/structured-output)
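+
+For instance, a request that references a schema view by name might look like this (a minimal sketch; the file path is hypothetical and `:chart_schema` must match a schema view such as `chart_schema.json.erb`):
+
+```ruby
+prompt = DataExtractionAgent.with(
+  response_format: { type: "json_schema", json_schema: :chart_schema },
+  file_path: "path/to/report.pdf" # hypothetical path
+).parse_content
+
+response = prompt.generate_now
+```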
+
+### Structured Output Schemas
+
+When using structured output:
+- The response will have `content_type` of `application/json`
+- The response content will be valid JSON matching your schema
+- When using the `json_schema` response format, the content is returned already parsed (as shown in the examples below); for plain JSON responses, parse it with `JSON.parse(response.message.content)`
+
+#### Generating Schemas from Models
+
+ActiveAgent provides a `SchemaGenerator` module that can automatically create JSON schemas from your ActiveRecord and ActiveModel classes. This makes it easy to ensure extracted data matches your application's data models.
+
+##### Basic Usage
+
+::: code-group
+<<< @/../test/schema_generator_test.rb#basic_user_model {ruby:line-numbers}
+<<< @/../test/schema_generator_test.rb#basic_schema_generation {ruby:line-numbers}
+:::
+
+The `to_json_schema` method generates a JSON schema from your model's attributes and validations.
+
+##### Schema with Validations
+
+Model validations are automatically included in the generated schema:
+
+<<< @/../test/schema_generator_test.rb#schema_with_validations {ruby:line-numbers}
+
+##### Strict Schema for Structured Output
+
+For use with AI providers that support structured output, generate a strict schema:
+
+::: code-group
+<<< @/../test/schema_generator_test.rb#blog_post_model {ruby:line-numbers}
+<<< @/../test/schema_generator_test.rb#strict_schema_generation {ruby:line-numbers}
+:::
+
+##### Using Generated Schemas in Agents
+
+Agents can use the schema generator to create structured output schemas dynamically:
+
+<<< @/../test/schema_generator_test.rb#agent_using_schema {ruby:line-numbers}
+
+This allows you to maintain a single source of truth for your data models and automatically generate schemas for AI extraction.
+
+::: info Provider Support
+Structured output requires a provider that supports JSON schemas. Currently supported providers include:
+- **OpenAI** - GPT-4o, GPT-4o-mini, and newer models that support JSON schema response formats
+- **OpenRouter** - When routing to compatible models (for example, OpenAI models accessed through OpenRouter)
+
+See the [OpenRouter Provider documentation](/providers/open-router-provider#structured-output-support) for details on using structured output with multiple model providers.
+:::
+
+
+### Parse Chart Image with Structured Output
+
+Extract chart data with the predefined `chart_schema`:
+
+```ruby
+sales_chart_path = Rails.root.join("test", "fixtures", "images", "sales_chart.png")
+
+prompt = DataExtractionAgent.with(
+ response_format: {
+ type: "json_schema",
+ json_schema: :chart_schema
+ },
+ image_path: sales_chart_path
+).parse_content
+
+response = prompt.generate_now
+
+# When using json_schema response_format, content is already parsed
+json_response = response.message.content
+
+puts json_response["title"]
+# => "Quarterly Sales Report"
+puts json_response["data_points"].first
+# => {"label"=>"Q1", "value"=>25000}
+```
+
+#### Response
+
+:::: tabs
+
+== Response Object
+```ruby
+response = prompt.generate_now
+# Response has parsed JSON content
+```
+::: details Generation Response Example
+
+:::
+== JSON Output
+
+```ruby
+# When using json_schema response_format, content is already parsed
+json_response = response.message.content
+```
+::: details Parse Chart JSON Response Example
+
+:::
+::::
+
+### Parse Resume with the Resume Schema
+
+Extract information from PDF resumes:
+
+```ruby
+sample_resume_path = Rails.root.join("test", "fixtures", "files", "sample_resume.pdf")
+
+prompt = DataExtractionAgent.with(
+ file_path: sample_resume_path
+).parse_content
+
+response = prompt.generate_now
+
+# When using json_schema response_format, content is auto-parsed
+puts response.message.content["name"]
+# => "John Doe"
+puts response.message.content["experience"].first["job_title"]
+# => "Senior Software Engineer"
+```
+
+#### Parse Resume with Structured Output
+
+[Sample resume PDF](https://docs.activeagents.ai/sample_resume.pdf)
+
+Extract resume data with the predefined `resume_schema`:
+
+:::: tabs
+
+== Prompt Generation
+
+```ruby
+prompt = DataExtractionAgent.with(
+ file_path: Rails.root.join("test", "fixtures", "files", "sample_resume.pdf")
+).parse_content
+
+response = prompt.generate_now
+```
+::: details Generation Response Example
+
+:::
+== JSON Output
+
+```ruby
+# When using json_schema response_format, content is already parsed
+json_response = response.message.content
+
+puts json_response["name"]
+# => "John Doe"
+puts json_response["email"]
+# => "john.doe@example.com"
+```
+::: details Parse Resume JSON Response Example
+
+:::
+::::
+
+## Advanced Examples
+
+### Receipt Data Extraction with OpenRouter
+
+For extracting data from receipts and invoices, you can use OpenRouter's multimodal capabilities combined with structured output. OpenRouter provides access to models that support both vision and structured output, making it ideal for document processing tasks.
+
+See the [OpenRouter Receipt Extraction example](/providers/open-router-provider#receipt-data-extraction-with-structured-output) for a complete implementation that extracts:
+- Merchant information (name, address)
+- Line items with prices
+- Tax and total amounts
+- Currency details
+
+### Using Different Providers
+
+The Data Extraction Agent can work with any provider that supports the required capabilities:
+
+- **For text extraction**: Any provider (OpenAI, Anthropic, Ollama, etc.)
+- **For image analysis**: Providers with vision models (OpenAI GPT-4o, Anthropic Claude 3, etc.)
+- **For structured output**: OpenAI models or OpenRouter with compatible models
+- **For PDF processing**: OpenRouter with PDF plugins or models with native PDF support
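+
+As a rough sketch (the provider keys and model names below are assumptions — adjust them to whatever providers you have configured), switching the agent to a different provider only changes the `generate_with` call:
+
+```ruby
+class DataExtractionAgent < ApplicationAgent
+  # Vision plus structured output via OpenAI, as used in the examples above
+  generate_with :openai, model: "gpt-4o"
+
+  # Or, hypothetically, a locally hosted model for privacy-sensitive text extraction
+  # generate_with :ollama, model: "llama3"
+end
+```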
+
+::: tip Provider Selection
+Choose your provider based on your specific needs:
+- **OpenAI**: Best for structured output with GPT-4o/GPT-4o-mini
+- **OpenRouter**: Access to 200+ models with fallback support
+- **Anthropic**: Strong reasoning capabilities with Claude models
+- **Ollama**: Local model deployment for privacy-sensitive data
+
+Learn more about configuring providers in the [Providers Overview](/framework/providers).
+:::
diff --git a/docs/examples/mcp-integration-agent.md b/docs/examples/mcp-integration-agent.md
new file mode 100644
index 00000000..846abf50
--- /dev/null
+++ b/docs/examples/mcp-integration-agent.md
@@ -0,0 +1,607 @@
+---
+title: MCP Integration Agent
+---
+# {{ $frontmatter.title }}
+
+The Model Context Protocol (MCP) Integration Agent demonstrates how to connect ActiveAgent with external services and tools through MCP. MCP provides a standardized way to integrate with cloud storage, APIs, and custom services.
+
+## Overview
+
+MCP enables agents to:
+- Connect to cloud storage services (Dropbox, Google Drive, SharePoint, etc.)
+- Access external APIs and databases
+- Use custom MCP servers for specialized functionality
+- Combine multiple data sources in a single agent
+
+## Features
+
+- **Cloud Storage Connectors** - Pre-built connectors for popular services
+- **Custom MCP Servers** - Connect to your own MCP-compatible services
+- **Multi-Source Search** - Combine multiple MCP servers in one query
+- **Approval Workflows** - Control which operations require user approval
+- **Secure Authorization** - Handle authentication tokens securely
+
+## Setup
+
+Generate an MCP integration agent:
+
+```bash
+rails generate active_agent:agent mcp_integration search_cloud_storage
+```
+
+## Agent Implementation
+
+```ruby
+# Example agent demonstrating MCP (Model Context Protocol) integration
+# MCP allows connecting to external services and tools
+class McpIntegrationAgent < ApplicationAgent
+ generate_with :openai, model: "gpt-5" # Responses API required for MCP
+
+ # Use MCP connectors for cloud storage services
+ def search_cloud_storage
+ @query = params[:query]
+ @service = params[:service] || "dropbox"
+ @auth_token = params[:auth_token]
+
+ prompt(
+ message: "Search for: #{@query}",
+ options: {
+ use_responses_api: true,
+ tools: [build_connector_tool(@service, @auth_token)]
+ }
+ )
+ end
+
+ # Use custom MCP server for specialized functionality
+ def use_custom_mcp
+ @query = params[:query]
+ @server_url = params[:server_url]
+ @allowed_tools = params[:allowed_tools]
+
+ prompt(
+ message: @query,
+ options: {
+ use_responses_api: true,
+ tools: [
+ {
+ type: "mcp",
+ server_label: "Custom MCP Server",
+ server_url: @server_url,
+ server_description: "Custom MCP server for specialized tasks",
+ require_approval: "always", # Require approval for safety
+ allowed_tools: @allowed_tools
+ }
+ ]
+ }
+ )
+ end
+
+ # Combine multiple MCP servers for comprehensive search
+ def multi_source_search
+ @query = params[:query]
+ @sources = params[:sources] || ["github", "dropbox"]
+ @auth_tokens = params[:auth_tokens] || {}
+
+ tools = @sources.map do |source|
+ case source
+ when "github"
+ {
+ type: "mcp",
+ server_label: "GitHub",
+ server_url: "https://api.githubcopilot.com/mcp/",
+ server_description: "Search GitHub repositories",
+ require_approval: "never"
+ }
+ when "dropbox"
+ build_connector_tool("dropbox", @auth_tokens["dropbox"])
+ when "google_drive"
+ build_connector_tool("google_drive", @auth_tokens["google_drive"])
+ end
+ end.compact
+
+ prompt(
+ message: "Search across multiple sources: #{@query}",
+ options: {
+ use_responses_api: true,
+ tools: tools
+ }
+ )
+ end
+
+ # Use MCP with approval workflow
+ def sensitive_operation
+ @operation = params[:operation]
+ @mcp_config = params[:mcp_config]
+
+ prompt(
+ message: "Perform operation: #{@operation}",
+ options: {
+ use_responses_api: true,
+ tools: [
+ {
+ type: "mcp",
+ server_label: @mcp_config[:label],
+ server_url: @mcp_config[:url],
+ authorization: @mcp_config[:auth],
+ require_approval: {
+ never: {
+ tool_names: ["read", "search"] # Safe operations
+ }
+ }
+ # All other operations will require approval
+ }
+ ]
+ }
+ )
+ end
+
+ private
+
+ def build_connector_tool(service, auth_token)
+ connector_configs = {
+ "dropbox" => {
+ connector_id: "connector_dropbox",
+ label: "Dropbox"
+ },
+ "google_drive" => {
+ connector_id: "connector_googledrive",
+ label: "Google Drive"
+ },
+ "gmail" => {
+ connector_id: "connector_gmail",
+ label: "Gmail"
+ },
+ "sharepoint" => {
+ connector_id: "connector_sharepoint",
+ label: "SharePoint"
+ },
+ "outlook" => {
+ connector_id: "connector_outlookemail",
+ label: "Outlook Email"
+ }
+ }
+
+ config = connector_configs[service]
+ return nil unless config && auth_token
+
+ {
+ type: "mcp",
+ server_label: config[:label],
+ connector_id: config[:connector_id],
+ authorization: auth_token,
+ require_approval: "never" # Or configure based on your needs
+ }
+ end
+end
+```
+
+## Usage Examples
+
+### Search Cloud Storage
+
+Search for files across cloud storage services:
+
+```ruby
+response = McpIntegrationAgent.with(
+ query: "Q4 2024 financial report",
+ service: "dropbox",
+ auth_token: user.dropbox_token
+).search_cloud_storage.generate_now
+
+puts response.message.content
+# => Returns information about matching files in Dropbox
+```
+
+### Multi-Source Search
+
+Search across multiple services simultaneously:
+
+```ruby
+response = McpIntegrationAgent.with(
+ query: "project documentation",
+ sources: ["github", "google_drive", "dropbox"],
+ auth_tokens: {
+ "google_drive" => user.google_token,
+ "dropbox" => user.dropbox_token
+ }
+).multi_source_search.generate_now
+
+puts response.message.content
+# => Returns results from all three sources
+```
+
+### Custom MCP Server
+
+Connect to your own MCP-compatible service:
+
+```ruby
+response = McpIntegrationAgent.with(
+ query: "customer support tickets from last week",
+ server_url: "https://api.mycompany.com/mcp",
+ allowed_tools: ["search_tickets", "get_ticket_details"]
+).use_custom_mcp.generate_now
+
+puts response.message.content
+# => Returns ticket information from custom server
+```
+
+### Approval Workflow
+
+Control which operations require user approval:
+
+```ruby
+response = McpIntegrationAgent.with(
+ operation: "analyze sales data",
+ mcp_config: {
+ label: "Company Database",
+ url: "https://db.company.com/mcp",
+ auth: database_token
+ }
+).sensitive_operation.generate_now
+
+# Read operations execute automatically
+# Write/delete operations will require approval
+```
+
+## Cloud Storage Connectors
+
+### Supported Services
+
+ActiveAgent provides pre-built connectors for:
+
+- **Dropbox** - `connector_dropbox`
+- **Google Drive** - `connector_googledrive`
+- **Gmail** - `connector_gmail`
+- **SharePoint** - `connector_sharepoint`
+- **Outlook** - `connector_outlookemail`
+
+### Configuration
+
+Each connector requires:
+
+```ruby
+{
+ type: "mcp",
+ server_label: "Service Name", # Display name
+ connector_id: "connector_service", # Connector identifier
+ authorization: "user_auth_token", # User's OAuth token
+ require_approval: "never" # Approval policy
+}
+```
+
+### Authentication
+
+Handle OAuth tokens securely:
+
+```ruby
+class McpIntegrationAgent < ApplicationAgent
+ before_action :validate_tokens
+
+ private
+
+ def validate_tokens
+ service = params[:service]
+ token = params[:auth_token]
+
+ unless token.present? && valid_token?(token, service)
+ raise "Invalid or missing authentication token for #{service}"
+ end
+ end
+
+ def valid_token?(token, service)
+ # Verify token is valid and not expired
+ TokenValidator.valid?(token, service)
+ end
+end
+```
+
+## Custom MCP Servers
+
+### Server Configuration
+
+Connect to custom MCP-compatible servers:
+
+```ruby
+{
+ type: "mcp",
+ server_label: "My Custom Server",
+ server_url: "https://api.example.com/mcp/",
+ server_description: "Description of what this server does",
+ authorization: auth_token,
+ require_approval: "always", # or "never"
+ allowed_tools: ["tool1", "tool2"] # Optional: restrict available tools
+}
+```
+
+### Building Custom MCP Servers
+
+Your custom MCP server should implement:
+
+1. **Tool Discovery** - Endpoint to list available tools
+2. **Tool Execution** - Endpoint to execute tool calls
+3. **Authentication** - Support for authorization tokens
+4. **Error Handling** - Proper error responses
+
+Example server structure:
+
+```ruby
+# lib/mcp_server.rb
+class McpServer
+ def tools
+ [
+ {
+ name: "search_database",
+ description: "Search the company database",
+ parameters: {
+ type: "object",
+ properties: {
+ query: { type: "string", description: "Search query" },
+ limit: { type: "integer", description: "Max results" }
+ },
+ required: ["query"]
+ }
+ }
+ ]
+ end
+
+ def execute_tool(name, params)
+ case name
+ when "search_database"
+ Database.search(params["query"], limit: params["limit"])
+ else
+ { error: "Unknown tool: #{name}" }
+ end
+ end
+end
+```
+
+## Approval Workflows
+
+### Approval Policies
+
+Control which operations require user approval:
+
+```ruby
+{
+ require_approval: {
+ never: {
+ tool_names: ["read", "search", "list"] # Safe read-only operations
+ }
+ # All other operations will require approval
+ }
+}
+```
+
+### Approval Options
+
+- **"never"** - All operations execute automatically (use with caution)
+- **"always"** - All operations require approval (safest)
+- **Custom** - Specify which tools don't require approval
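+
+Put together, the three policies look roughly like this inside a tool definition (a sketch assembled from the connector and custom-server examples above; `server_url` and `auth_token` are placeholders):
+
+```ruby
+# Safest: every tool call pauses for user approval
+{ type: "mcp", server_label: "Company DB", server_url: server_url, require_approval: "always" }
+
+# Fully automatic: use with caution, e.g. for read-only connectors
+{ type: "mcp", server_label: "Dropbox", connector_id: "connector_dropbox",
+  authorization: auth_token, require_approval: "never" }
+
+# Custom: only the listed tools skip approval; everything else requires it
+{
+  type: "mcp",
+  server_label: "Company DB",
+  server_url: server_url,
+  require_approval: { never: { tool_names: ["read", "search", "list"] } }
+}
+```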
+
+### Implementing Approval UI
+
+```ruby
+class McpApprovalsController < ApplicationController
+ def show
+ @approval_request = ApprovalRequest.find(params[:id])
+ end
+
+ def approve
+ approval = ApprovalRequest.find(params[:id])
+ approval.approve!
+
+ # Resume agent execution
+ agent = McpIntegrationAgent.new
+ response = agent.resume_with_approval(approval)
+
+ redirect_to chat_path, notice: "Operation approved"
+ end
+
+ def reject
+ approval = ApprovalRequest.find(params[:id])
+ approval.reject!
+
+ redirect_to chat_path, notice: "Operation rejected"
+ end
+end
+```
+
+## Security Considerations
+
+### Token Management
+
+Store and handle authentication tokens securely:
+
+```ruby
+class User < ApplicationRecord
+ # Encrypt tokens at rest
+ encrypts :dropbox_token
+ encrypts :google_token
+
+ # Validate token before use
+ def valid_dropbox_token?
+ return false unless dropbox_token.present?
+ return false if dropbox_token_expired?
+ true
+ end
+
+ def dropbox_token_expired?
+ # Check token expiration
+ dropbox_token_expires_at < Time.current
+ end
+end
+```
+
+### Scope Limiting
+
+Request only necessary permissions:
+
+```ruby
+# config/initializers/oauth.rb
+DROPBOX_SCOPES = %w[
+ files.metadata.read
+ files.content.read
+ # Don't request write/delete unless needed
+].freeze
+```
+
+### Audit Logging
+
+Log all MCP operations for security auditing:
+
+```ruby
+class McpIntegrationAgent < ApplicationAgent
+ after_action :log_mcp_operation
+
+ private
+
+ def log_mcp_operation
+ McpAuditLog.create!(
+ user_id: params[:user_id],
+ service: params[:service],
+ operation: action_name,
+ query: params[:query],
+ timestamp: Time.current
+ )
+ end
+end
+```
+
+## Integration Patterns
+
+### User-Scoped Tokens
+
+Use per-user authentication:
+
+```ruby
+class ChatController < ApplicationController
+ def message
+ response = McpIntegrationAgent.with(
+ query: params[:query],
+ service: params[:service],
+ auth_token: current_user.token_for(params[:service])
+ ).search_cloud_storage.generate_now
+
+ render json: { message: response.message.content }
+ end
+end
+```
+
+### Service Discovery
+
+Dynamically discover available services:
+
+```ruby
+class McpIntegrationAgent < ApplicationAgent
+ def available_services
+ user = User.find(params[:user_id])
+
+ services = []
+ services << "dropbox" if user.dropbox_token.present?
+ services << "google_drive" if user.google_token.present?
+ services << "github" if user.github_token.present?
+
+ services
+ end
+end
+```
+
+### Fallback Handling
+
+Handle service unavailability gracefully:
+
+```ruby
+def multi_source_search
+ tools = @sources.map do |source|
+ build_connector_tool(source, @auth_tokens[source])
+ end.compact # Remove nil values for unavailable services
+
+ if tools.empty?
+ raise "No services available. Please connect at least one service."
+ end
+
+ prompt(
+ message: @query,
+ options: {
+ use_responses_api: true,
+ tools: tools
+ }
+ )
+end
+```
+
+## Testing
+
+### Mock MCP Responses
+
+```ruby
+class McpIntegrationAgentTest < ActiveSupport::TestCase
+ test "searches cloud storage" do
+ VCR.use_cassette("mcp_dropbox_search") do
+ response = McpIntegrationAgent.with(
+ query: "test file",
+ service: "dropbox",
+ auth_token: "test_token"
+ ).search_cloud_storage.generate_now
+
+ assert response.message.content.present?
+ end
+ end
+
+ test "combines multiple sources" do
+ response = McpIntegrationAgent.with(
+ query: "documentation",
+ sources: ["github"],
+ auth_tokens: {}
+ ).multi_source_search.generate_now
+
+ assert response.message.content.present?
+ end
+end
+```
+
+## Rate Limiting
+
+Implement rate limiting for MCP operations:
+
+```ruby
+class McpIntegrationAgent < ApplicationAgent
+ before_action :check_mcp_rate_limit
+
+ private
+
+ def check_mcp_rate_limit
+ service = params[:service]
+ user_id = params[:user_id]
+ key = "mcp_rate:#{user_id}:#{service}"
+
+ count = Rails.cache.increment(key, 1, expires_in: 1.hour)
+
+ if count > 50 # 50 requests per hour per service
+ raise "Rate limit exceeded for #{service}"
+ end
+ end
+end
+```
+
+## Provider Requirements
+
+MCP integration requires:
+
+- **Responses API** - MCP is only available with Responses API
+- **Compatible Models** - gpt-5 or newer
+- **OpenAI Provider** - Currently OpenAI-specific
+
+```ruby
+# config/application.rb
+config.active_agent.providers = {
+ openai: {
+ api_key: ENV["OPENAI_API_KEY"],
+ use_responses_api: true # Required for MCP
+ }
+}
+```
+
+## Conclusion
+
+The MCP Integration Agent demonstrates how to connect ActiveAgent with external services and data sources. Whether using pre-built cloud storage connectors or custom MCP servers, MCP provides a standardized, secure way to extend your agents' capabilities beyond their training data and into your organization's systems and data.
diff --git a/docs/examples/research-agent.md b/docs/examples/research-agent.md
new file mode 100644
index 00000000..805612d2
--- /dev/null
+++ b/docs/examples/research-agent.md
@@ -0,0 +1,672 @@
+---
+title: Research Agent
+---
+# {{ $frontmatter.title }}
+
+The Research Agent demonstrates how to build agents that combine multiple tools and data sources for comprehensive research tasks. It shows integration with web search, MCP servers, and image generation to create powerful research workflows.
+
+## Overview
+
+The Research Agent showcases:
+- **Multi-Tool Integration** - Combining web search, MCP, and image generation
+- **Concern-Based Architecture** - Using concerns to share research functionality
+- **Configurable Tools** - Dynamic tool configuration based on research needs
+- **Academic Sources** - Integration with ArXiv, PubMed, and other research databases
+
+## Features
+
+- **Web Search Integration** - Access current information via web search
+- **MCP Server Support** - Connect to academic databases (ArXiv, GitHub, PubMed)
+- **Image Generation** - Create visualizations for research findings
+- **Configurable Depth** - Adjust research comprehensiveness (quick vs. detailed)
+- **Literature Review** - Specialized action for academic research
+- **Source Citation** - Track and cite research sources
+
+## Setup
+
+Generate a research agent:
+
+```bash
+rails generate active_agent:agent research comprehensive_research literature_review
+```
+
+## Agent Implementation
+
+```ruby
+class ResearchAgent < ApplicationAgent
+ include ResearchTools
+
+ # Configure the agent to use OpenAI with specific settings
+ generate_with :openai, model: "gpt-4o"
+
+ # Configure research tools at the class level
+ configure_research_tools(
+ enable_web_search: true,
+ mcp_servers: ["arxiv", "github"],
+ default_search_context: "high"
+ )
+
+ # Agent-specific action that uses both concern tools and custom logic
+ def comprehensive_research
+ @topic = params[:topic]
+ @depth = params[:depth] || "detailed"
+
+ # This action combines multiple tools
+ prompt(
+ message: "Conduct comprehensive research on: #{@topic}",
+ tools: build_comprehensive_tools
+ )
+ end
+
+ def literature_review
+ @topic = params[:topic]
+ @sources = params[:sources] || ["arxiv", "pubmed"]
+
+ # Use the concern's search_with_mcp_sources internally
+ mcp_tools = build_mcp_tools(@sources)
+
+ prompt(
+ message: "Conduct a literature review on: #{@topic}\nFocus on peer-reviewed sources from the last 5 years.",
+ tools: [
+ { type: "web_search_preview", search_context_size: "high" },
+ *mcp_tools
+ ]
+ )
+ end
+
+ private
+
+ def build_comprehensive_tools
+ tools = []
+
+ # Add web search for general information
+ tools << {
+ type: "web_search_preview",
+ search_context_size: @depth == "detailed" ? "high" : "medium"
+ }
+
+ # Add MCP servers from configuration
+ if research_tools_config[:mcp_servers]
+ tools.concat(build_mcp_tools(research_tools_config[:mcp_servers]))
+ end
+
+ # Add image generation for visualizations
+ if @depth == "detailed"
+ tools << {
+ type: "image_generation",
+ size: "1024x1024",
+ quality: "high"
+ }
+ end
+
+ tools
+ end
+
+ def build_mcp_tools(sources)
+ sources.map do |source|
+ {
+ type: "mcp",
+ server_label: source.titleize,
+ server_url: mcp_server_url(source)
+ }
+ end
+ end
+
+ def mcp_server_url(source)
+ # Map source names to MCP server URLs
+ urls = {
+ "arxiv" => "https://api.arxiv.org/mcp/",
+ "github" => "https://api.githubcopilot.com/mcp/",
+ "pubmed" => "https://api.pubmed.gov/mcp/"
+ }
+ urls[source]
+ end
+
+ def research_tools_config
+ self.class.research_tools_config || {}
+ end
+end
+```
+
+### Research Tools Concern
+
+Share research functionality across agents:
+
+```ruby
+# app/agents/concerns/research_tools.rb
+module ResearchTools
+ extend ActiveSupport::Concern
+
+ class_methods do
+ def configure_research_tools(config = {})
+ @research_tools_config = config
+ end
+
+ def research_tools_config
+ @research_tools_config || {}
+ end
+ end
+
+ def search_academic_papers
+ @query = params[:query]
+ @sources = params[:sources] || ["arxiv"]
+
+ prompt(
+ message: "Search for academic papers about: #{@query}",
+ tools: build_mcp_tools(@sources)
+ )
+ end
+
+ def analyze_research_data
+ @data = params[:data]
+ @analysis_type = params[:analysis_type] || "statistical"
+
+ prompt(
+ message: "Analyze the following research data using #{@analysis_type} methods:\n\n#{@data}"
+ )
+ end
+
+ def generate_research_visualization
+ @topic = params[:topic]
+ @style = params[:style] || "infographic"
+
+ prompt(
+ message: "Create a #{@style} visualization for: #{@topic}",
+ tools: [
+ {
+ type: "image_generation",
+ size: "1024x1024",
+ quality: "high"
+ }
+ ]
+ )
+ end
+end
+```
+
+## Usage Examples
+
+### Comprehensive Research
+
+Conduct multi-source research on a topic:
+
+```ruby
+response = ResearchAgent.with(
+ topic: "quantum computing advances in 2025",
+ depth: "detailed"
+).comprehensive_research.generate_now
+
+puts response.message.content
+# => Returns comprehensive research with web sources, academic papers, and visualizations
+```
+
+### Quick Research
+
+For faster, less comprehensive research:
+
+```ruby
+response = ResearchAgent.with(
+ topic: "Ruby on Rails 8 features",
+ depth: "quick"
+).comprehensive_research.generate_now
+
+puts response.message.content
+# => Returns focused research with medium-depth web search
+```
+
+### Literature Review
+
+Focus on academic sources for scholarly research:
+
+```ruby
+response = ResearchAgent.with(
+ topic: "machine learning in healthcare",
+ sources: ["arxiv", "pubmed"]
+).literature_review.generate_now
+
+puts response.message.content
+# => Returns peer-reviewed research from ArXiv and PubMed
+```
+
+### Custom Sources
+
+Specify specific research databases:
+
+```ruby
+response = ResearchAgent.with(
+ topic: "climate change models",
+ sources: ["arxiv", "github"] # Academic papers + code repositories
+).literature_review.generate_now
+
+puts response.message.content
+# => Combines academic papers with open-source implementations
+```
+
+## Research Tools Configuration
+
+### Class-Level Configuration
+
+Configure default research settings:
+
+```ruby
+class ResearchAgent < ApplicationAgent
+ configure_research_tools(
+ enable_web_search: true,
+ mcp_servers: ["arxiv", "github", "pubmed"],
+ default_search_context: "high",
+ enable_visualizations: true
+ )
+end
+```
+
+### Runtime Configuration
+
+Override defaults for specific requests:
+
+```ruby
+response = ResearchAgent.with(
+ topic: "topic",
+ depth: "detailed", # Override default depth
+ sources: ["arxiv"] # Override default MCP servers
+).comprehensive_research.generate_now
+```
+
+## Tool Combinations
+
+### Web Search + Academic Sources
+
+Combine current information with peer-reviewed research:
+
+```ruby
+tools = [
+ { type: "web_search_preview", search_context_size: "high" },
+ { type: "mcp", server_label: "ArXiv", server_url: "..." },
+ { type: "mcp", server_label: "PubMed", server_url: "..." }
+]
+
+prompt(message: "Research topic", tools: tools)
+```
+
+### Research + Visualization
+
+Include image generation for data visualization:
+
+```ruby
+tools = [
+ { type: "web_search_preview", search_context_size: "high" },
+ { type: "image_generation", size: "1024x1024", quality: "high" }
+]
+
+prompt(
+ message: "Research #{topic} and create an infographic",
+ tools: tools
+)
+```
+
+### GitHub + Academic Papers
+
+Combine theory with practical implementations:
+
+```ruby
+tools = [
+ { type: "mcp", server_label: "ArXiv", server_url: "..." }, # Papers
+ { type: "mcp", server_label: "GitHub", server_url: "..." } # Code
+]
+
+prompt(
+ message: "Find papers and implementations for #{algorithm}",
+ tools: tools
+)
+```
+
+## Academic Source Integration
+
+### ArXiv Integration
+
+Search academic papers on ArXiv:
+
+```ruby
+{
+ type: "mcp",
+ server_label: "ArXiv",
+ server_url: "https://api.arxiv.org/mcp/",
+ server_description: "Academic papers in physics, math, CS, and more"
+}
+```
+
+### PubMed Integration
+
+Access medical and life sciences research:
+
+```ruby
+{
+ type: "mcp",
+ server_label: "PubMed",
+ server_url: "https://api.pubmed.gov/mcp/",
+ server_description: "Biomedical literature database"
+}
+```
+
+### GitHub Integration
+
+Find open-source implementations:
+
+```ruby
+{
+ type: "mcp",
+ server_label: "GitHub",
+ server_url: "https://api.githubcopilot.com/mcp/",
+ server_description: "Code repositories and implementations"
+}
+```
+
+## Using Concerns for Shared Functionality
+
+### Creating a Research Concern
+
+```ruby
+# app/agents/concerns/research_tools.rb
+module ResearchTools
+ extend ActiveSupport::Concern
+
+ included do
+ class_attribute :research_tools_config, default: {}
+ end
+
+ class_methods do
+ def configure_research_tools(config = {})
+ self.research_tools_config = config
+ end
+ end
+
+ # Shared research actions
+ def search_papers
+ # Implementation
+ end
+
+ def analyze_data
+ # Implementation
+ end
+end
+```
+
+### Using the Concern
+
+```ruby
+class ResearchAgent < ApplicationAgent
+ include ResearchTools
+
+ configure_research_tools(
+ enable_web_search: true,
+ mcp_servers: ["arxiv"]
+ )
+end
+
+class AcademicAgent < ApplicationAgent
+ include ResearchTools
+
+ configure_research_tools(
+ enable_web_search: false,
+ mcp_servers: ["arxiv", "pubmed"]
+ )
+end
+```
+
+## Integration Patterns
+
+### Controller Integration
+
+Use research agents in your application:
+
+```ruby
+class ResearchController < ApplicationController
+ def research
+ response = ResearchAgent.with(
+ topic: params[:topic],
+ depth: params[:depth] || "detailed"
+ ).comprehensive_research.generate_now
+
+ render json: {
+ topic: params[:topic],
+ findings: response.message.content,
+ sources: extract_sources(response)
+ }
+ end
+
+ private
+
+ def extract_sources(response)
+ # Extract citations and sources from response
+ response.message.content.scan(/\[(\d+)\]/).flatten
+ end
+end
+```
+
+### Background Jobs
+
+Process research asynchronously:
+
+```ruby
+class ResearchJob < ApplicationJob
+ queue_as :default
+
+ def perform(topic, user_id)
+ response = ResearchAgent.with(
+ topic: topic,
+ depth: "detailed"
+ ).comprehensive_research.generate_now
+
+ # Save results
+ ResearchResult.create!(
+ user_id: user_id,
+ topic: topic,
+ findings: response.message.content
+ )
+
+ # Notify user
+ UserMailer.research_complete(user_id, topic).deliver_later
+ end
+end
+```
+
+### Caching Research
+
+Cache expensive research operations:
+
+```ruby
+class ResearchAgent < ApplicationAgent
+ def cached_research
+ @topic = params[:topic]
+ cache_key = "research:#{Digest::MD5.hexdigest(@topic)}"
+
+ Rails.cache.fetch(cache_key, expires_in: 24.hours) do
+ comprehensive_research.generate_now
+ end
+ end
+end
+```
+
+## Advanced Features
+
+### Progressive Research
+
+Build up research incrementally:
+
+```ruby
+def progressive_research
+ @topic = params[:topic]
+ results = []
+
+ # Step 1: Quick web search
+ results << quick_search(@topic)
+
+ # Step 2: Academic papers
+ results << search_papers(@topic)
+
+ # Step 3: Code examples
+ results << search_code(@topic)
+
+ # Step 4: Synthesize findings
+ synthesize_results(results)
+end
+```
+
+### Source Prioritization
+
+Prioritize certain sources:
+
+```ruby
+def prioritized_research
+ @topic = params[:topic]
+
+ # Try academic sources first
+ response = search_academic_only(@topic)
+
+ # Fall back to web search if insufficient
+ if response.confidence < 0.7
+ response = add_web_search(response, @topic)
+ end
+
+ response
+end
+```
+
+### Citation Extraction
+
+Extract and format citations:
+
+```ruby
+def extract_citations(response)
+ citations = []
+
+ response.prompt.messages.each do |message|
+ next unless message.role == :tool
+ next unless message.content.include?("arxiv") || message.content.include?("pubmed")
+
+ citations << parse_citation(message.content)
+ end
+
+ citations
+end
+
+def parse_citation(content)
+ # Extract title, authors, date, DOI, etc.
+ {
+ title: extract_title(content),
+ authors: extract_authors(content),
+ year: extract_year(content),
+ doi: extract_doi(content)
+ }
+end
+```
+
+## Testing
+
+### Test Research Workflow
+
+```ruby
+class ResearchAgentTest < ActiveSupport::TestCase
+ test "conducts comprehensive research" do
+ VCR.use_cassette("research_comprehensive") do
+ response = ResearchAgent.with(
+ topic: "test topic",
+ depth: "detailed"
+ ).comprehensive_research.generate_now
+
+ assert response.message.content.present?
+ assert response.message.content.length > 500 # Substantial content
+ end
+ end
+
+ test "performs literature review" do
+ response = ResearchAgent.with(
+ topic: "machine learning",
+ sources: ["arxiv"]
+ ).literature_review.generate_now
+
+ assert response.message.content.present?
+ # Check that academic sources were used
+ tool_messages = response.prompt.messages.select { |m| m.role == :tool }
+ assert tool_messages.any? { |m| m.content.include?("arxiv") }
+ end
+end
+```
+
+### Mock External Services
+
+```ruby
+class ResearchAgentTest < ActiveSupport::TestCase
+ setup do
+ @mock_arxiv_response = {
+ papers: [
+ { title: "Test Paper", authors: "Author", year: 2025 }
+ ]
+ }
+ end
+
+ test "handles mock MCP responses" do
+ # Mock MCP server responses
+ stub_request(:post, "https://api.arxiv.org/mcp/")
+ .to_return(body: @mock_arxiv_response.to_json)
+
+ response = ResearchAgent.with(
+ topic: "test",
+ sources: ["arxiv"]
+ ).literature_review.generate_now
+
+ assert response.message.content.include?("Test Paper")
+ end
+end
+```
+
+## Best Practices
+
+### Source Selection
+
+Choose appropriate sources for your research:
+
+- **ArXiv**: Physics, mathematics, computer science
+- **PubMed**: Medical and life sciences
+- **GitHub**: Code implementations and examples
+- **Web Search**: Current events and general information
+
+### Depth Configuration
+
+Balance comprehensiveness with speed:
+
+```ruby
+# Quick research (< 30 seconds)
+depth: "quick" # Medium web search, no visualizations
+
+# Standard research (30-60 seconds)
+depth: "standard" # High web search, basic MCP
+
+# Detailed research (1-2 minutes)
+depth: "detailed" # High web search, multiple MCP, visualizations
+```
+
+### Result Validation
+
+Validate research quality:
+
+```ruby
+def validate_research(response)
+ content = response.message.content
+
+ # Check for minimum content length
+ return false if content.length < 500
+
+ # Check for citations
+ return false unless content.include?("[") && content.include?("]")
+
+ # Check for multiple sources
+ tool_messages = response.prompt.messages.select { |m| m.role == :tool }
+ return false if tool_messages.length < 2
+
+ true
+end
+```
+
+## Conclusion
+
+The Research Agent demonstrates how to build sophisticated research workflows by combining multiple tools and data sources. Through concern-based architecture and configurable tool selection, it provides a flexible foundation for academic research, technical investigations, and comprehensive information gathering tasks.
diff --git a/docs/examples/support-agent.md b/docs/examples/support-agent.md
new file mode 100644
index 00000000..4a6e2bc8
--- /dev/null
+++ b/docs/examples/support-agent.md
@@ -0,0 +1,450 @@
+---
+title: Support Agent
+---
+# {{ $frontmatter.title }}
+
+The Support Agent is a simple example demonstrating core ActiveAgent concepts including tool calling, message context, and multimodal responses. It serves as a reference implementation for building customer support chatbots.
+
+## Overview
+
+The Support Agent demonstrates:
+- Basic agent setup with instructions
+- Tool calling (action methods as tools)
+- Message context and conversation flow
+- Multimodal responses (text and images)
+
+## Features
+
+- **Simple Configuration** - Minimal setup with clear instructions
+- **Tool Integration** - Agent actions become available as AI tools
+- **Message Context** - Access complete conversation history
+- **Multimodal Support** - Return images and other content types
+
+## Setup
+
+Generate a support agent:
+
+```bash
+rails generate active_agent:agent support get_cat_image
+```
+
+## Agent Implementation
+
+```ruby
+class SupportAgent < ApplicationAgent
+ generate_with :openai,
+ model: "gpt-4o-mini",
+ instructions: "You're a support agent. Your job is to help users with their questions."
+
+ def get_cat_image
+ prompt(content_type: "image_url", context_id: params[:context_id]) do |format|
+ format.text { render plain: CatImageService.fetch_image_url }
+ end
+ end
+end
+```
+
+## Usage Examples
+
+### Basic Prompt
+
+Send a simple message to the agent:
+
+```ruby
+prompt = SupportAgent.prompt(message: "Hello, I need help")
+
+puts prompt.message.content
+# => "Hello, I need help"
+
+response = prompt.generate_now
+
+puts response.message.content
+# => "Hello! I'm here to help you. What can I assist you with today?"
+```
+
+### Tool Calling
+
+The agent can call its defined actions as tools:
+
+```ruby
+message = "Show me a cat"
+prompt = SupportAgent.prompt(message: message)
+
+response = prompt.generate_now
+
+# The agent will call the get_cat_image action
+puts response.message.content
+# => "Here's a cute cat for you! [image displayed]"
+```
+
+### Message Context
+
+Access the complete conversation history:
+
+```ruby
+response = SupportAgent.prompt(message: "Show me a cat").generate_now
+
+# Messages include system, user, assistant, and tool messages
+puts response.prompt.messages.size
+# => 5+ messages
+
+# Group messages by role
+system_messages = response.prompt.messages.select { |m| m.role == :system }
+user_messages = response.prompt.messages.select { |m| m.role == :user }
+assistant_messages = response.prompt.messages.select { |m| m.role == :assistant }
+tool_messages = response.prompt.messages.select { |m| m.role == :tool }
+
+# System message contains agent instructions
+puts system_messages.first.content
+# => "You're a support agent. Your job is to help users with their questions."
+
+# The response message is the last message in the context
+puts response.message == response.prompt.messages.last
+# => true
+```
+
+### Inspecting Tool Messages
+
+Tool messages contain the results of action calls:
+
+```ruby
+response = SupportAgent.prompt(message: "Show me a cat").generate_now
+
+tool_messages = response.prompt.messages.select { |m| m.role == :tool }
+
+puts tool_messages.first.content
+# => Contains the cat image URL: "https://cataas.com/cat/..."
+
+# Assistant messages with requested_actions indicate tool calls
+assistant_with_actions = response.prompt.messages.find do |m|
+ m.role == :assistant && m.requested_actions&.any?
+end
+
+puts assistant_with_actions.requested_actions.first.name
+# => "get_cat_image"
+```
+
+## Understanding Message Flow
+
+### Message Roles
+
+ActiveAgent uses different message roles for conversation context:
+
+1. **System** - Agent instructions and configuration
+2. **User** - User's input messages
+3. **Assistant** - AI-generated responses
+4. **Tool** - Results from action/tool calls
+
+### Conversation Example
+
+```ruby
+response = SupportAgent.prompt(message: "Show me a cat").generate_now
+
+response.prompt.messages.each do |message|
+ puts "#{message.role}: #{message.content[0..50]}..."
+end
+
+# Output:
+# system: You're a support agent. Your job is to help...
+# user: Show me a cat
+# assistant: [tool_call: get_cat_image]
+# tool: https://cataas.com/cat/...
+# assistant: Here's a cute cat for you!
+```
+
+## Multimodal Responses
+
+### Returning Images
+
+The `get_cat_image` action demonstrates multimodal responses:
+
+```ruby
+def get_cat_image
+ prompt(
+ content_type: "image_url", # Specify content type
+ context_id: params[:context_id] # Maintain conversation context
+ ) do |format|
+ format.text { render plain: CatImageService.fetch_image_url }
+ end
+end
+```
+
+### Custom Content Types
+
+Support different response formats:
+
+```ruby
+class SupportAgent < ApplicationAgent
+ def fetch_document
+ prompt(content_type: "application/pdf") do |format|
+ format.text { render plain: document_url }
+ end
+ end
+
+ def get_json_data
+ prompt(content_type: "application/json") do |format|
+ format.text { render json: { status: "success", data: fetch_data } }
+ end
+ end
+end
+```
+
+## Streaming Responses
+
+Enable streaming for real-time responses:
+
+```ruby
+prompt = SupportAgent.prompt(message: "Tell me a long story")
+
+prompt.generate_now do |chunk|
+ print chunk # Stream each chunk as it arrives
+end
+```
+
+## Adding More Actions
+
+Extend the support agent with additional tools:
+
+```ruby
+class SupportAgent < ApplicationAgent
+ generate_with :openai,
+ model: "gpt-4o-mini",
+ instructions: "You're a support agent. Your job is to help users."
+
+ # Look up order status
+ def check_order_status
+ @order_id = params[:order_id]
+ order = Order.find_by(id: @order_id)
+
+ prompt do |format|
+ format.text do
+ render plain: "Order ##{@order_id}: #{order.status}"
+ end
+ end
+ end
+
+ # Search knowledge base
+ def search_kb
+ @query = params[:query]
+ articles = KnowledgeBase.search(@query).limit(5)
+
+ prompt do |format|
+ format.text do
+ render plain: articles.map(&:title).join("\n")
+ end
+ end
+ end
+
+ # Get cat image
+ def get_cat_image
+ prompt(content_type: "image_url") do |format|
+ format.text { render plain: CatImageService.fetch_image_url }
+ end
+ end
+end
+```
+
+## Integration with Rails
+
+### Controller Integration
+
+Use the support agent in a controller:
+
+```ruby
+class ChatController < ApplicationController
+ def message
+ response = SupportAgent.prompt(
+ message: params[:message],
+ context_id: session[:chat_context_id]
+ ).generate_now
+
+ # Save context for multi-turn conversations
+ session[:chat_context_id] = response.prompt.id
+
+ render json: {
+ message: response.message.content,
+ context_id: response.prompt.id
+ }
+ end
+end
+```
+
+### WebSocket Integration
+
+Stream responses via WebSocket:
+
+```ruby
+class ChatChannel < ApplicationCable::Channel
+ def message(data)
+ prompt = SupportAgent.prompt(message: data["message"])
+
+ prompt.generate_now do |chunk|
+ transmit({ chunk: chunk })
+ end
+ end
+end
+```
+
+## Testing
+
+### Test Agent Behavior
+
+```ruby
+class SupportAgentTest < ActiveSupport::TestCase
+ test "agent responds to greetings" do
+ response = SupportAgent.prompt(message: "Hello").generate_now
+
+ assert response.message.content.present?
+ assert_match(/hello|hi|greet/i, response.message.content)
+ end
+
+ test "agent calls get_cat_image tool" do
+ response = SupportAgent.prompt(message: "Show me a cat").generate_now
+
+ # Check that tool was called
+ tool_messages = response.prompt.messages.select { |m| m.role == :tool }
+ assert tool_messages.any?
+
+ # Check that response mentions the cat
+ assert response.message.content.present?
+ end
+end
+```
+
+### Mock Tool Responses
+
+Mock external services in tests:
+
+```ruby
+class SupportAgentTest < ActiveSupport::TestCase
+ setup do
+ CatImageService.stub :fetch_image_url, "https://example.com/cat.jpg" do
+ @response = SupportAgent.prompt(message: "Show me a cat").generate_now
+ end
+ end
+
+ test "returns mocked cat image" do
+ tool_messages = @response.prompt.messages.select { |m| m.role == :tool }
+ assert_includes tool_messages.first.content, "example.com/cat.jpg"
+ end
+end
+```
+
+## Configuration Options
+
+### Model Selection
+
+Choose appropriate models for your use case:
+
+```ruby
+class SupportAgent < ApplicationAgent
+ # Fast and economical for simple support
+ generate_with :openai, model: "gpt-4o-mini"
+
+ # More capable for complex queries
+ # generate_with :openai, model: "gpt-4o"
+
+ # Maximum capability for advanced support
+ # generate_with :openai, model: "gpt-5"
+end
+```
+
+### Custom Instructions
+
+Tailor agent behavior with specific instructions:
+
+```ruby
+class SupportAgent < ApplicationAgent
+ generate_with :openai,
+ model: "gpt-4o-mini",
+ instructions: <<~INSTRUCTIONS
+ You're a technical support agent for Acme Corp.
+
+ Guidelines:
+ - Always be polite and professional
+ - Ask clarifying questions when needed
+ - Provide step-by-step solutions
+ - Escalate to human support for billing issues
+
+ Available tools:
+ - check_order_status: Look up order information
+ - search_kb: Search knowledge base
+ - get_cat_image: Send a cat image (for fun)
+ INSTRUCTIONS
+end
+```
+
+## Best Practices
+
+### Context Management
+
+Maintain conversation context across turns:
+
+```ruby
+# First message
+response1 = SupportAgent.prompt(
+ message: "I have a problem",
+ context_id: user_session_id
+).generate_now
+
+# Follow-up message uses same context
+response2 = SupportAgent.prompt(
+ message: "Can you explain more?",
+ context_id: user_session_id
+).generate_now
+
+# Both responses share the same conversation history
+```
+
+### Error Handling
+
+Handle errors gracefully:
+
+```ruby
+def check_order_status
+ @order_id = params[:order_id]
+ order = Order.find_by(id: @order_id)
+
+ prompt do |format|
+ format.text do
+ if order
+ render plain: "Order ##{@order_id}: #{order.status}"
+ else
+ render plain: "Order not found. Please check the order number."
+ end
+ end
+ end
+rescue => e
+ prompt do |format|
+ format.text do
+ render plain: "Error checking order: #{e.message}"
+ end
+ end
+end
+```
+
+### Rate Limiting
+
+Implement rate limiting for production:
+
+```ruby
+class SupportAgent < ApplicationAgent
+ before_action :check_rate_limit
+
+ private
+
+ def check_rate_limit
+ user_id = params[:user_id]
+ key = "support_agent:#{user_id}"
+ count = Rails.cache.increment(key, 1, expires_in: 1.minute)
+
+ if count > 10
+ raise "Rate limit exceeded. Please try again later."
+ end
+ end
+end
+```
+
+## Conclusion
+
+The Support Agent provides a simple, clear example of core ActiveAgent concepts. It demonstrates how to build conversational AI agents with tool calling, message context, and multimodal responses—all while maintaining familiar Rails patterns and conventions.
diff --git a/docs/examples/translation-agent.md b/docs/examples/translation-agent.md
new file mode 100644
index 00000000..b02b3087
--- /dev/null
+++ b/docs/examples/translation-agent.md
@@ -0,0 +1,78 @@
+---
+title: Translation Agent
+---
+# {{ $frontmatter.title }}
+
+The Translation Agent demonstrates how to create specialized agents for specific tasks like language translation.
+
+## Setup
+
+Generate a translation agent:
+
+```bash
+rails generate active_agent:agent translation translate
+```
+
+## Implementation
+
+```ruby
+class TranslationAgent < ApplicationAgent
+ generate_with :openai, instructions: "Translate the given text from one language to another."
+
+ def translate
+ prompt
+ end
+end
+```
+
+## Usage Examples
+
+### Basic Translation
+
+The translation agent accepts a message and target locale:
+
+```ruby
+translate_prompt = TranslationAgent.with(
+ message: "Hi, I'm Justin",
+ locale: "japanese"
+).translate
+
+puts translate_prompt.message.content
+# => "translate: Hi, I'm Justin; to japanese"
+
+puts translate_prompt.instructions
+# => "Translate the given text from one language to another."
+```
+
+### Translation Generation
+
+Generate a translation using the configured AI provider:
+
+```ruby
+response = TranslationAgent.with(
+ message: "Hi, I'm Justin",
+ locale: "japanese"
+).translate.generate_now
+
+puts response.message.content
+# => "こんにちは、私はジャスティンです。"
+```
+
+::: details Response Example
+
+:::
+
+## Key Features
+
+- **Action-based Translation**: Use the `translate` action to process translations
+- **Locale Support**: Pass target language as a parameter
+- **Prompt Templates**: Customize translation prompts through view templates
+- **Instruction Override**: Define custom translation instructions per agent
+
+## View Templates
+
+The translation agent uses view templates to format prompts:
+
+```erb
+translate: <%= params[:message] %>; to <%= params[:locale] %>
+```
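+
+A quick sketch of how the parameters passed via `with` flow into this template (the rendered output format is taken from the earlier example):
+
+```ruby
+prompt = TranslationAgent.with(
+  message: "Good morning",
+  locale: "french"
+).translate
+
+puts prompt.message.content
+# => "translate: Good morning; to french"
+```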
diff --git a/docs/examples/web-search-agent.md b/docs/examples/web-search-agent.md
new file mode 100644
index 00000000..34c3e923
--- /dev/null
+++ b/docs/examples/web-search-agent.md
@@ -0,0 +1,335 @@
+---
+title: Web Search Agent
+---
+# {{ $frontmatter.title }}
+
+Active Agent provides web search capabilities through integration with OpenAI's search models and tools. The Web Search Agent demonstrates how to use both the Chat Completions API (with search-preview models) and the Responses API (with the `web_search_preview` tool) to access real-time web information.
+
+## Overview
+
+The Web Search Agent shows two approaches to web search:
+- **Chat Completions API**: Uses special `gpt-4o-search-preview` model with built-in search
+- **Responses API**: Uses regular models with `web_search_preview` tool for more control
+
+## Features
+
+- **Current Events Search** - Search for recent news and information
+- **Configurable Context Size** - Control how much web context to include (low/medium/high)
+- **Location-Based Search** - Provide user location for localized results
+- **Multi-Tool Integration** - Combine web search with image generation and other tools
+
+## Setup
+
+Generate a web search agent:
+
+```bash
+rails generate active_agent:agent web_search search_current_events search_with_tools
+```
+
+## Agent Implementation
+
+```ruby
+# Example agent demonstrating web search capabilities
+# Works with both Chat Completions API and Responses API
+class WebSearchAgent < ApplicationAgent
+ # For Chat API, use the search-preview models
+ # For Responses API, use regular models with web_search_preview tool
+ generate_with :openai, model: "gpt-4o"
+
+ # Action for searching current events using Chat API with web search model
+ def search_current_events
+ @query = params[:query]
+ @location = params[:location]
+
+ # When using gpt-4o-search-preview model, web search is automatic
+ prompt(
+ message: @query,
+ options: chat_api_search_options
+ )
+ end
+
+ # Action for searching with Responses API (more flexible)
+ def search_with_tools
+ @query = params[:query]
+ @context_size = params[:context_size] || "medium"
+
+ prompt(
+ message: @query,
+ options: {
+ use_responses_api: true, # Force Responses API
+ tools: [
+ {
+ type: "web_search_preview",
+ search_context_size: @context_size
+ }
+ ]
+ }
+ )
+ end
+
+ # Action that combines web search with image generation (Responses API only)
+ def research_and_visualize
+ @topic = params[:topic]
+
+ prompt(
+ message: "Research #{@topic} and create a visualization",
+ options: {
+ model: "gpt-5", # Responses API model
+ use_responses_api: true,
+ tools: [
+ { type: "web_search_preview", search_context_size: "high" },
+ { type: "image_generation", size: "1024x1024", quality: "high" }
+ ]
+ }
+ )
+ end
+
+ private
+
+ def chat_api_search_options
+ options = {
+ model: "gpt-4o-search-preview" # Special model for Chat API web search
+ }
+
+ # Add web_search_options for Chat API
+ if @location
+ options[:web_search] = {
+ user_location: format_location(@location)
+ }
+ else
+ options[:web_search] = {} # Enable web search with defaults
+ end
+
+ options
+ end
+
+ def format_location(location)
+ # Format location for API
+ {
+ country: location[:country] || "US",
+ city: location[:city],
+ region: location[:region],
+ timezone: location[:timezone]
+ }.compact
+ end
+end
+```
+
+## Usage Examples
+
+### Chat API with Search Preview Model
+
+Use the Chat Completions API with the special search-preview model:
+
+```ruby
+response = WebSearchAgent.with(
+ query: "Latest developments in AI for 2025"
+).search_current_events.generate_now
+
+puts response.message.content
+# => Returns current information about AI developments
+```
+
+### Chat API with Location Context
+
+Provide location information for localized search results:
+
+```ruby
+response = WebSearchAgent.with(
+ query: "Best restaurants near me",
+ location: {
+ country: "US",
+ city: "San Francisco",
+ region: "CA"
+ }
+).search_current_events.generate_now
+
+puts response.message.content
+# => Returns San Francisco restaurant recommendations
+```
+
+### Responses API with Web Search Tool
+
+Use the Responses API for more control over search context:
+
+```ruby
+response = WebSearchAgent.with(
+ query: "Latest Ruby on Rails 8 features",
+ context_size: "high" # Options: low, medium, high
+).search_with_tools.generate_now
+
+puts response.message.content
+# => Returns comprehensive information about Rails 8
+```
+
+### Combining Web Search with Image Generation
+
+Use multiple tools together in the Responses API:
+
+```ruby
+response = WebSearchAgent.with(
+ topic: "Climate Change Impact 2025"
+).research_and_visualize.generate_now
+
+# Response includes both research findings and generated visualizations
+puts response.message.content
+```
+
+## Configuration Options
+
+### Search Context Size
+
+Control how much web context to include:
+
+```ruby
+{
+ type: "web_search_preview",
+ search_context_size: "high" # Options: low, medium, high
+}
+```
+
+- **low**: Minimal web context, faster responses
+- **medium**: Balanced context and speed (default)
+- **high**: Maximum web context, most comprehensive
+
+### User Location
+
+Provide location for localized results:
+
+```ruby
+{
+ web_search: {
+ user_location: {
+ country: "US",
+ city: "New York",
+ region: "NY",
+ timezone: "America/New_York"
+ }
+ }
+}
+```
+
+## API Comparison
+
+### Chat Completions API
+- Uses `gpt-4o-search-preview` model
+- Web search is automatic when model is specified
+- Location can be provided via `web_search` options
+- Simpler configuration for basic search needs
+
+### Responses API
+- Uses regular models (gpt-4o, gpt-5) with `web_search_preview` tool
+- More control over search parameters
+- Can combine with other tools (image generation, MCP, etc.)
+- Required for multi-tool workflows
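+
+The difference is mostly a matter of request options. A minimal sketch, using a hypothetical `NewsAgent` whose action picks the API per request (option names follow the `WebSearchAgent` example above):
+
+```ruby
+class NewsAgent < ApplicationAgent
+  def headlines
+    if params[:use_responses_api]
+      # Responses API: regular model plus an explicit web_search_preview tool
+      prompt(
+        message: params[:query],
+        options: {
+          use_responses_api: true,
+          tools: [ { type: "web_search_preview", search_context_size: "medium" } ]
+        }
+      )
+    else
+      # Chat Completions API: web search comes from the search-preview model
+      prompt(
+        message: params[:query],
+        options: { model: "gpt-4o-search-preview", web_search: {} }
+      )
+    end
+  end
+end
+```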
+
+## Best Practices
+
+### When to Use Chat API
+- Simple web search queries
+- When you need quick current information
+- Location-based searches
+- Straightforward question-answering
+
+### When to Use Responses API
+- Combining web search with other tools
+- Need fine-grained control over search context
+- Building complex workflows
+- Integrating with MCP or custom tools
+
+### Search Query Tips
+- Be specific in your queries for better results
+- Use natural language questions
+- Specify time ranges when needed ("latest", "2025", "recent")
+- Include context for ambiguous terms
+
+## Integration with Rails
+
+Use web search in your Rails controllers:
+
+```ruby
+class SearchController < ApplicationController
+ def search
+ response = WebSearchAgent.with(
+ query: params[:q],
+ context_size: params[:detail] || "medium"
+ ).search_with_tools.generate_now
+
+ render json: {
+ query: params[:q],
+ answer: response.message.content,
+ sources: extract_sources(response)
+ }
+ end
+
+ private
+
+ def extract_sources(response)
+    # Extract numbered citation markers (e.g. [1], [2]) from the response content, if present
+ response.message.content.scan(/\[(\d+)\]/).flatten
+ end
+end
+```
+
+## Provider Support
+
+Web search capabilities require the OpenAI provider:
+
+```ruby
+# config/application.rb
+config.active_agent.providers = {
+ openai: {
+ api_key: ENV["OPENAI_API_KEY"]
+ }
+}
+```
+
+::: tip Model Availability
+- **Chat API Search**: Requires `gpt-4o-search-preview` or newer
+- **Responses API**: Works with `gpt-4o`, `gpt-5`, and other compatible models
+- Check OpenAI documentation for the latest model availability
+:::
+
+## Advanced Usage
+
+### Caching Search Results
+
+Cache expensive search operations:
+
+```ruby
+class WebSearchAgent < ApplicationAgent
+ def cached_search
+ @query = params[:query]
+ cache_key = "web_search:#{Digest::MD5.hexdigest(@query)}"
+
+ Rails.cache.fetch(cache_key, expires_in: 1.hour) do
+ search_with_tools.generate_now
+ end
+ end
+end
+```
+
+### Rate Limiting
+
+Implement rate limiting for search requests:
+
+```ruby
+class WebSearchAgent < ApplicationAgent
+ before_action :check_rate_limit
+
+ private
+
+ def check_rate_limit
+ key = "search_rate:#{params[:user_id]}"
+    # increment behavior for a missing key varies by cache store, so guard against nil
+    count = Rails.cache.increment(key, 1, expires_in: 1.hour).to_i
+
+    if count > 100
+ raise "Rate limit exceeded"
+ end
+ end
+end
+```
+
+## Conclusion
+
+The Web Search Agent demonstrates ActiveAgent's ability to leverage OpenAI's web search capabilities for accessing real-time information. Whether using the Chat API for simplicity or the Responses API for advanced workflows, web search integration enables agents to provide current, factual information beyond their training data.
diff --git a/docs/feature-proposal-json-tool-outputs.md b/docs/feature-proposal-json-tool-outputs.md
deleted file mode 100644
index 6a8c796f..00000000
--- a/docs/feature-proposal-json-tool-outputs.md
+++ /dev/null
@@ -1,216 +0,0 @@
-# Feature Proposal: JSON Tool Outputs for Actions
-
-## Overview
-
-Currently, ActiveAgent supports JSON output through `output_schema` for generation providers, but actions that render tool JSON schemas with tool output schemas are not yet supported. This proposal outlines how this feature could work from a developer API perspective.
-
-## Current State
-
-- Actions can render prompts with various formats (text, html, json)
-- Generation providers support `output_schema` for structured JSON responses
-- Tools/functions are defined in the agent but don't have a way to specify output schemas for their JSON responses
-
-## Proposed Feature
-
-### 1. Action Definition with Tool Output Schema
-
-```ruby
-class TravelAgent < ApplicationAgent
- # Define a tool with output schema
- def book
- prompt(
- message: params[:message],
- content_type: :json,
- template: "travel_agent/book",
- tool_output_schema: {
- type: "object",
- properties: {
- booking_id: { type: "string" },
- status: { type: "string", enum: ["confirmed", "pending", "failed"] },
- price: { type: "number" },
- details: {
- type: "object",
- properties: {
- flight: { type: "string" },
- hotel: { type: "string" },
- dates: {
- type: "object",
- properties: {
- check_in: { type: "string", format: "date" },
- check_out: { type: "string", format: "date" }
- }
- }
- }
- }
- },
- required: ["booking_id", "status", "price"]
- }
- )
- end
-end
-```
-
-### 2. Template Support
-
-The JSON template would need to conform to the defined schema:
-
-```erb
-<%# app/views/travel_agent/book.json.erb %>
-{
- "booking_id": "<%= @prompt.booking_id %>",
- "status": "<%= @prompt.status %>",
- "price": <%= @prompt.price %>,
- "details": {
- "flight": "<%= @prompt.flight_number %>",
- "hotel": "<%= @prompt.hotel_name %>",
- "dates": {
- "check_in": "<%= @prompt.check_in_date %>",
- "check_out": "<%= @prompt.check_out_date %>"
- }
- }
-}
-```
-
-### 3. ActionPrompt::Base Changes
-
-The `prompt` method in `ActionPrompt::Base` would need to be updated to:
-
-1. Accept `tool_output_schema` parameter
-2. Validate the rendered JSON against the schema
-3. Include the schema in the tool definition sent to the generation provider
-
-```ruby
-# lib/active_agent/action_prompt/base.rb
-def prompt(message: nil, context: {}, content_type: nil, template: nil, tool_output_schema: nil)
- # ... existing code ...
-
- if tool_output_schema && content_type == :json
- # Register this action as a tool with output schema
- register_tool_with_schema(action_name, tool_output_schema)
-
- # Validate rendered output against schema
- validate_json_output(rendered_content, tool_output_schema)
- end
-
- # ... rest of implementation ...
-end
-```
-
-### 4. Tool Registration
-
-Tools would be automatically registered with their schemas:
-
-```ruby
-class ApplicationAgent < ActiveAgent::Base
- def self.tools
- @tools ||= actions.map do |action|
- if action.tool_output_schema
- {
- type: "function",
- function: {
- name: action.name,
- description: action.description,
- parameters: action.input_schema,
- output: action.tool_output_schema # New field
- }
- }
- else
- # Existing tool definition without output schema
- end
- end
- end
-end
-```
-
-## Benefits
-
-1. **Type Safety**: Ensures tool outputs conform to expected schemas
-2. **Better AI Integration**: Generation providers can understand what format to expect from tools
-3. **Developer Experience**: Clear contract for what each tool returns
-4. **Documentation**: Tool output schemas serve as documentation
-
-## Implementation Considerations
-
-1. **Schema Validation**: Need to add JSON Schema validation for tool outputs
-2. **Error Handling**: What happens when output doesn't match schema?
-3. **Backwards Compatibility**: Ensure existing tools without output schemas continue to work
-4. **Generation Provider Support**: Different providers may handle tool output schemas differently
-
-## Example Use Cases
-
-### 1. E-commerce Order Processing
-```ruby
-def process_order
- prompt(
- message: params[:order_details],
- content_type: :json,
- tool_output_schema: {
- type: "object",
- properties: {
- order_id: { type: "string" },
- total: { type: "number" },
- items: {
- type: "array",
- items: {
- type: "object",
- properties: {
- sku: { type: "string" },
- quantity: { type: "integer" },
- price: { type: "number" }
- }
- }
- }
- }
- }
- )
-end
-```
-
-### 2. Data Analysis Results
-```ruby
-def analyze_data
- prompt(
- message: params[:query],
- content_type: :json,
- tool_output_schema: {
- type: "object",
- properties: {
- summary: { type: "string" },
- metrics: {
- type: "object",
- properties: {
- mean: { type: "number" },
- median: { type: "number" },
- std_dev: { type: "number" }
- }
- },
- chart_data: {
- type: "array",
- items: {
- type: "object",
- properties: {
- x: { type: "number" },
- y: { type: "number" }
- }
- }
- }
- }
- }
- )
-end
-```
-
-## Next Steps
-
-1. Prototype the changes to `ActionPrompt::Base`
-2. Add JSON Schema validation library dependency
-3. Update generation provider integrations to support tool output schemas
-4. Create comprehensive test suite
-5. Update documentation and examples
-
-## Questions for Discussion
-
-1. Should we enforce schema validation or make it optional?
-2. How should we handle schema validation errors during development vs production?
-3. Should tool output schemas be defined at the class level or action level?
-4. Do we need to support schema references ($ref) for complex schemas?
\ No newline at end of file
diff --git a/docs/framework.md b/docs/framework.md
new file mode 100644
index 00000000..597d920a
--- /dev/null
+++ b/docs/framework.md
@@ -0,0 +1,154 @@
+---
+title: Active Agent
+---
+# {{ $frontmatter.title }}
+
+ActiveAgent extends Rails MVC to AI interactions. Build intelligent agents using familiar patterns—controllers, actions, callbacks, and views.
+
+## Quick Example
+
+::: code-group
+<<< @/../test/docs/framework_examples_test.rb#quick_example_support_agent{ruby:line-numbers} [support_agent.rb]
+<<< @/../test/dummy/app/views/agents/framework_examples_test/quick_example_test/support/instructions.md.erb{md:line-numbers} [support_agent/instructions.md.erb]
+:::
+
+**Usage:**
+
+<<< @/../test/docs/framework_examples_test.rb#quick_example_support_agent_usage{ruby:line-numbers}
+
+::: details Response Example
+
+:::
+
+## Agent Oriented Programming
+
+ActiveAgent applies Agent Oriented Programming (AOP) to Rails—a paradigm where agents are the primary building blocks. Agents combine behavior (instructions), state (context), and capabilities (tools) into autonomous components.
+
+**Programming Paradigm Shift:**
+
+| Concept | Object-Oriented | Agent-Oriented |
+|---------|----------------|----------------|
+| **Unit** | Object | Agent |
+| **Parameters** | message, args, block | prompt, context, tools |
+| **Computation** | method, send, return | perform, generate, response |
+| **State** | instance variables | prompt context |
+| **Flow** | method calls | prompt-response cycles |
+| **Constraints** | coded logic | written instructions |
+
+Write instructions instead of algorithms. Define context instead of managing state. Coordinate through prompts instead of method chains.
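+
+A rough sketch of that shift, with hypothetical `ReportSummarizer` and `SupportAgent` classes standing in for real application code:
+
+```ruby
+# Object-oriented: call a method and work with its return value
+summary = ReportSummarizer.new(report).summarize
+
+# Agent-oriented: prompt an agent action and work with the generated response
+response = SupportAgent.with(message: report.body).summarize.generate_now
+summary  = response.message.content
+```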
+
+## Understanding Agents
+
+Agents mirror how users interact with systems—they have identity, behavior, and goals:
+
+| Aspect | User | Agent |
+|--------|------|-------|
+| **Who** | Persona | Archetype |
+| **Behavior** | Stories | Instructions |
+| **State** | Scenario | Context |
+| **What** | Objective | Goal |
+| **How** | Actions | Tools |
+
+When you define an agent, you create a specialized participant that interacts with your application through prompts, maintains conversation context, and uses tools to accomplish objectives.
+
+## Core Architecture
+
+
+
+**Three Key Objects:**
+
+- **Agent** (Controller) - Manages lifecycle, defines actions, configures providers
+- **Generation** (Request Proxy) - Coordinates execution, holds configuration, provides synchronous/async methods. Created by invocation, it's lazy—execution doesn't start until you call `.generate_now`, `.embed_now`, or `.generate_later`.
+- **Response** (Result) - Contains messages, metadata, token usage, and parsed output. Returned after Generation executes.
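+
+A minimal sketch of the lazy flow, using the invocation style from the usage examples (agent and action names are illustrative):
+
+```ruby
+generation = SupportAgent.with(message: "Reset my password").triage # Generation created, no API call yet
+response   = generation.generate_now                                # the request executes here
+response.message.content                                            # generated message from the Response
+```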
+
+**Request-Response Lifecycle:**
+
+1. **Invocation** → Generation object created with parameters
+2. **Callbacks** → `before_generation` hooks execute
+3. **Action** → Agent method called (optional for direct invocations)
+4. **Prompt/Embed** → `prompt()` or `embed()` configures request context
+5. **Template** → ERB view renders (if template exists)
+6. **Request** → Provider request built with messages, tools, options
+7. **Execution** → API called (with streaming/tool execution if configured)
+8. **Processing** → Response parsed, messages extracted
+9. **Callbacks** → `after_generation` hooks execute
+10. **Return** → Response object with message and metadata
+
+**Three Invocation Patterns:**
+
+<<< @/../test/docs/framework_examples_test.rb#invocation_pattern_direct{ruby:line-numbers}
+<<< @/../test/docs/framework_examples_test.rb#invocation_pattern_parameterized{ruby:line-numbers}
+<<< @/../test/docs/framework_examples_test.rb#invocation_pattern_action_based{ruby:line-numbers}
+
+See [Generation](/agents/generation) for complete execution details.
+
+## MVC Mapping
+
+ActiveAgent maps Rails MVC patterns to AI interactions:
+
+### Model: Prompt Interface
+
+The **prompt** and **embed** interfaces build the runtime request context inside agent actions. Calling `prompt(message: "...", tools: [...])` or `embed(input: "...")` configures the context with messages, tools, response_format, temperature, and other parameters that define the AI request.
+
+Use these methods in your action methods to build the request context before execution. See [Messages](/actions/messages) for complete details.
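+
+A minimal sketch of an action building its request context (the agent, action, and option values are illustrative):
+
+```ruby
+class SummaryAgent < ApplicationAgent
+  generate_with :openai, model: "gpt-4o"
+
+  def summarize
+    @text = params[:text]
+
+    # prompt() assembles the message and provider options for this request
+    prompt(
+      message: "Summarize the following text:\n#{@text}",
+      options: { temperature: 0.2 }
+    )
+  end
+end
+```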
+
+### View: Message Templates
+
+**ERB templates** render instructions, messages, and schemas for AI requests. Templates are optional—you can pass strings or hashes directly.
+
+- **Instructions** - System prompts that guide agent behavior (`.text.erb`, `.md.erb`)
+- **Messages** - User/assistant conversation content (`.text.erb`, `.md.erb`, `.html.erb`)
+- **Schemas** - JSON response format definitions (`.json`)
+
+See [Instructions](/agents/instructions), [Messages](/actions/messages), and [Structured Output](/actions/structured_output) for template patterns.
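+
+A sketch of the view naming convention (the paths and agent are illustrative):
+
+```ruby
+# app/views/support_agent/instructions.md.erb -> rendered as the system prompt
+# app/views/support_agent/answer.md.erb       -> rendered as the action's message
+class SupportAgent < ApplicationAgent
+  def answer
+    @question = params[:question]
+    prompt # with no explicit message, the action's template supplies the content
+  end
+end
+```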
+
+### Controller: Agents
+
+**Agents** are controllers with actions (public methods), callbacks (`before_generation`, `after_generation`), and provider configuration (`generate_with`, `embed_with`).
+
+Actions call `prompt()` or `embed()` to configure requests. Callbacks manage context and side effects. Configuration sets defaults for model, temperature, and other options. See [Agents](/agents) for complete patterns.
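+
+Putting those pieces together in one hypothetical agent (the callback bodies are illustrative):
+
+```ruby
+class ResearchAgent < ApplicationAgent
+  generate_with :openai, model: "gpt-4o", temperature: 0.3
+
+  before_generation :load_user_context
+  after_generation :log_generation
+
+  def investigate
+    prompt(message: params[:question])
+  end
+
+  private
+
+  def load_user_context
+    @user = User.find_by(id: params[:user_id])
+  end
+
+  def log_generation
+    Rails.logger.info("ResearchAgent generated a response for user #{@user&.id}")
+  end
+end
+```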
+
+## Integration Points
+
+ActiveAgent integrates with Rails features and AI capabilities:
+
+- **[Providers](/providers)** - Swap AI services (OpenAI, Anthropic, Ollama, OpenRouter)
+- **[Instructions](/agents/instructions)** - System prompts from templates or strings
+- **[Callbacks](/agents/callbacks)** - Lifecycle hooks for context and logging
+- **[Tools](/actions/tools)** - Agent methods as AI-callable functions
+- **[Structured Output](/actions/structured_output)** - JSON schemas for response format
+- **[Streaming](/agents/streaming)** - Real-time response updates
+- **[Messages](/actions/messages)** - Multimodal conversation context
+- **[Embeddings](/actions/embeddings)** - Vector generation for semantic search
+
+## Next Steps
+
+**Start Here:**
+- **[Getting Started](/getting-started)** - Build your first agent (step-by-step tutorial)
+- **[Agents](/agents)** - Deep dive into agent patterns and lifecycle
+- **[Actions](/actions)** - Define capabilities with messages, tools, and schemas
+
+**Core Features:**
+- [Generation](/agents/generation) - Synchronous and asynchronous execution
+- [Instructions](/agents/instructions) - System prompts and behavior guidance
+- [Messages](/actions/messages) - Conversation context with multimodal support
+- [Providers](/providers) - OpenAI, Anthropic, Ollama, OpenRouter configuration
+
+**Advanced:**
+- [Tools](/actions/tools) - AI-callable Ruby methods and MCP integration
+- [Structured Output](/actions/structured_output) - JSON schemas and validation
+- [Streaming](/agents/streaming) - Real-time response updates
+- [Callbacks](/agents/callbacks) - Lifecycle hooks and event handling
+- [Testing](/framework/testing) - Test agents with fixtures and VCR
+
+**Rails Integration:**
+- [Configuration](/framework/configuration) - Environment-specific settings
+- [Instrumentation](/framework/instrumentation) - Logging and monitoring
+- [Rails Integration](/framework/rails-integration) - ActionCable, ActiveJob, and more
+
+**Examples:**
+- [Data Extraction](/examples/data-extraction-agent) - Parse structured data from documents
+- [Translation](/examples/translation-agent) - Multi-step translation workflows
+- [Travel Agent](/examples/travel-agent) - Tool use and multi-turn conversations
+- [Browser Use](/examples/browser-use-agent) - Web scraping with AI
+
diff --git a/docs/framework/configuration.md b/docs/framework/configuration.md
new file mode 100644
index 00000000..cd3b57ce
--- /dev/null
+++ b/docs/framework/configuration.md
@@ -0,0 +1,328 @@
+# Configuration
+
+ActiveAgent provides flexible configuration options for both framework-level settings and provider-specific configurations. Configure global behavior like retry strategies and logging, or define multiple AI providers with environment-specific settings.
+
+## Global Settings
+
+Configure framework-level behavior using `ActiveAgent.configure`:
+
+```ruby
+ActiveAgent.configure do |config|
+ # Retry configuration (see Retries documentation for details)
+ config.retries = true
+ config.retries_count = 3
+
+ # Logging (non-Rails only)
+ config.logger = Logger.new(STDOUT)
+ config.logger.level = Logger::INFO
+end
+```
+
+### Reference
+
+| Setting | Type | Default | Description |
+|---------|------|---------|-------------|
+| `retries` | Boolean, Proc | `true` | Retry strategy for failed requests |
+| `retries_count` | Integer | `3` | Maximum retry attempts |
+| `retries_on` | Array\
- details: Agents are Controllers with a common Generation API with enhanced memory and tooling.
+ details: Controllers for AI. Define actions, manage context, and generate responses using Rails conventions.
- title: Actions
icon: 🦾
- link: /docs/action-prompt/actions
- details: Actions are tools for Agents to interact with systems and code.
+ link: /actions/actions
+ details: Public methods that render prompts or execute tools. Use ERB templates for complex formatting.
- title: Prompts
icon: 📝
- link: /docs/action-prompt/prompts
- details: Prompts are rendered with Action View. Agents can generate content using Action View.
- - title: Generation Providers
+ link: /actions/prompts
+ details: Runtime context with messages, actions, and parameters passed to AI providers.
+ - title: Providers
icon: 🏭
- link: /docs/framework/generation-provider
- details: Generation Providers establish a common interface for different AI service providers.
- - title: Queued Generation
- link: /docs/active-agent/queued-generation
- icon: ⏳
- details: Queued Generation manages asynchronous prompt generation and response cycles with Active Job.
+ link: /framework/providers
+ details: Unified interface for OpenAI, Anthropic, Ollama, and OpenRouter. Switch with one line.
+ - title: Tool Calling
+ icon: 🔧
+ link: /actions/tool-calling
+ details: Let AI agents call Ruby methods to fetch data, perform actions, and make decisions.
+ - title: Structured Output
+ icon: 📊
+ link: /agents/structured-output
+ details: Extract data into validated JSON schemas. Perfect for forms, APIs, and data processing.
- title: Streaming
- link: /docs/active-agent/callbacks#on-stream-callbacks
icon: 📡
- details: Streaming allows for real-time dynamic UI updates based on user & agent interactions, enhancing user experience and responsiveness in AI-driven applications.
+ link: /agents/callbacks#on-stream-callbacks
+ details: Real-time response streaming with Server-Sent Events for dynamic UIs.
- title: Callbacks
- link: /docs/active-agent/callbacks
icon: 🔄
- details: Callbacks enable contextual prompting using retrieval before_action or persistence after_generation.
- - title: Structured Output
- link: /docs/active-agent/structured-output
- icon: 📊
- details: Structured Output allows agents to return structured data in JSON format, enabling easier parsing and integration with other systems.
- # - title: Generative UI
- # link: /docs/active-agent/generative-ui
- # icon: 🖼️
- # details: Generative UI allows for dynamic and interactive user interfaces that adapt based on AI-generated interactions and content, enhancing user engagement and experience.
- # - title: RAG
- # icon: 📚
- # details: Retrieval Augmented Generation enables agents to access external data sources, enhancing their capabilities and providing more accurate and contextually relevant responses. While RAG has become synonymous with vector databases, it can also be used with traditional databases.
- # - title: Memory
- # icon: 🧠
- # details: Memory allows agents to retain information across sessions, enabling personalized and context-aware interactions with users.
- # - title: Lightweight
- # icon: ⚡
- # details: Active Agent keeps things simple, no multi-step workflows or unnecessary complexity. It integrates directly into your Rails app with clear separation of concerns, making AI features easy to implement and maintain. With less than 10 lines of code, you can ship an AI feature.
- # - title: Rails-Native
- # icon: 🚀
- # details: Active Agent is built explicitly for Rails, following familiar patterns for concise, effortless integrations with your existing stack. It is the only comprehensive solution that truly embraces Rails conventions.
- # - title: Flexible
- # icon: 🧩
- # details: Active Agent works seamlessly with tools like LangChain Ruby, pgvector, and the neighbors gem. Its agent-based architecture handles tool calls, renders prompts, and generates vector embeddings for pgvector with ease.
+ link: /agents/callbacks
+ details: Lifecycle hooks for retrieval, context management, and response handling.
+ - title: Queued Generation
+ link: /agents/queued-generation
+ icon: ⏳
+ details: Background processing with Active Job for async AI operations at scale.
+ - title: Testing
+ icon: 🧪
+ link: /framework/testing
+ details: Test with fixtures and VCR cassettes. Mock providers for fast, reliable tests.
+ - title: Embeddings
+ icon: 🎯
+ link: /framework/embeddings
+ details: Generate vector embeddings for semantic search, clustering, and RAG applications.
+ - title: Rails-Native
+ icon: 🚀
+ link: /framework/agents
+ details: Built for Rails. Familiar patterns, zero learning curve, production-ready from day one.
---
diff --git a/docs/parts/examples/structured-output-json-parsing-test.rb-test-structured-output-sets-content-type-to-application/json-and-auto-parses-JSON.md b/docs/parts/examples/structured-output-json-parsing-test.rb-test-structured-output-sets-content-type-to-application/json-and-auto-parses-JSON.md
deleted file mode 100644
index b8cf1eff..00000000
--- a/docs/parts/examples/structured-output-json-parsing-test.rb-test-structured-output-sets-content-type-to-application/json-and-auto-parses-JSON.md
+++ /dev/null
@@ -1,21 +0,0 @@
-
-[activeagent/test/integration/structured_output_json_parsing_test.rb:69](vscode://file//Users/justinbowen/Documents/GitHub/claude-could/activeagent/test/integration/structured_output_json_parsing_test.rb:69)
-
-
-```ruby
-# Response object
-#
details: Agents are Controllers with a common Generation API with enhanced memory and tooling.
- - title: Actions
+ - title: Actions
icon: 🦾
- link: /docs/action-prompt/actions
- details: Actions are tools for Agents to interact with systems and code.
+ link: /actions/actions
+ details: Actions organize agent behaviors. Optionally use Action View templates for complex formatting.
- title: Prompts
icon: 📝
- link: /docs/action-prompt/prompts
- details: Prompts are rendered with Action View. Agents can generate content using Action View.
- - title: Generation Providers
+ link: /actions/prompts
+ details: Prompts contain the runtime context, messages, and configuration for AI generation.
+ - title: Providers
icon: 🏭
- link: /docs/framework/generation-provider
- details: Generation Providers establish a common interface for different AI service providers.
+ link: /framework/providers
+ details: Providers establish a common interface for different AI service providers.
- title: Queued Generation
icon: ⏳
details: Queued Generation manages asynchronous prompt generation and response cycles with Active Job.
- title: Streaming
- link: /docs/active-agent/callbacks#streaming
+ link: /agents/callbacks#streaming
icon: 📡
details: Streaming allows for real-time dynamic UI updates based on user & agent interactions, enhancing user experience and responsiveness in AI-driven applications.
- title: Callbacks
icon: 🔄
details: Callbacks enable contextual prompting using retrieval before_action or persistence after_generation.
- title: Generative UI
- link: /docs/active-agent/generative-ui
+ link: /agents/generative-ui
icon: 🖼️
details: Generative UI allows for dynamic and interactive user interfaces that adapt based on AI-generated interactions and content, enhancing user engagement and experience.
---
-- <%%= @message %>, find me in <%= @path %> -
-%>
diff --git a/lib/generators/erb/templates/view.json.erb.tt b/lib/generators/erb/templates/view.json.erb.tt
deleted file mode 100644
index 559e9456..00000000
--- a/lib/generators/erb/templates/view.json.erb.tt
+++ /dev/null
@@ -1,16 +0,0 @@
-<%%= {
-  type: :function,
-  function: {
-    name: action_name,
-    description: "This action takes no params and gets a random cat image and returns it as a base64 string.",
-    parameters: {
-      type: :object,
-      properties: {
-        param_name: {
-          type: :string,
-          description: "The param_description"
-        }
-      }
-    }
-  }
-}.to_json.html_safe %>
\ No newline at end of file
diff --git a/lib/generators/test_unit/agent_generator.rb b/lib/generators/test_unit/agent_generator.rb
index f33526df..2fbde391 100644
--- a/lib/generators/test_unit/agent_generator.rb
+++ b/lib/generators/test_unit/agent_generator.rb
@@ -17,7 +17,7 @@ def create_test_files
   end
 
   def create_preview_files
-    template "preview.rb", File.join("test/agents/previews", class_path, "#{file_name}_agent_preview.rb")
+    template "preview.rb", File.join("test/docs/previews", class_path, "#{file_name}_agent_preview.rb")
   end
 
   private
diff --git a/lib/generators/test_unit/templates/functional_test.rb.tt b/lib/generators/test_unit/templates/functional_test.rb.tt
index 64523e7c..80900bf8 100644
--- a/lib/generators/test_unit/templates/functional_test.rb.tt
+++ b/lib/generators/test_unit/templates/functional_test.rb.tt
@@ -7,8 +7,10 @@ class <%= class_name %>AgentTest < ActiveAgent::TestCase
 <% end -%>
   test "<%= action %>" do
-    agent = <%= class_name %>Agent.<%= action %>
-    assert_equal <%= action.to_s.humanize.inspect %>, agent.prompt_context
+    generation = <%= class_name %>Agent.<%= action %>
+    assert_not_nil generation
+    # Test the generation by calling generate_now if needed
+    # response = generation.generate_now
   end
 <% end -%>
 <% if actions.blank? -%>
diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml
new file mode 100644
index 00000000..b14c61a6
--- /dev/null
+++ b/pnpm-lock.yaml
@@ -0,0 +1,1812 @@
+lockfileVersion: '9.0'
+
+settings:
+  autoInstallPeers: true
+  excludeLinksFromLockfile: false
+
+importers:
+
+  .:
+    dependencies:
+      vitepress-plugin-group-icons:
+        specifier: ^1.5.2
+        version: 1.6.4(markdown-it@14.1.0)(vite@5.4.21)
+    devDependencies:
+      vitepress:
+        specifier: ^1.6.3
+        version: 1.6.4(@algolia/client-search@5.40.1)(postcss@8.5.6)(search-insights@2.17.3)
+      vitepress-plugin-tabs:
+        specifier: ^0.7.1
+        version: 0.7.3(vitepress@1.6.4(@algolia/client-search@5.40.1)(postcss@8.5.6)(search-insights@2.17.3))(vue@3.5.22)
true + peerDependencies: + '@types/node': ^18.0.0 || >=20.0.0 + less: '*' + lightningcss: ^1.21.0 + sass: '*' + sass-embedded: '*' + stylus: '*' + sugarss: '*' + terser: ^5.4.0 + peerDependenciesMeta: + '@types/node': + optional: true + less: + optional: true + lightningcss: + optional: true + sass: + optional: true + sass-embedded: + optional: true + stylus: + optional: true + sugarss: + optional: true + terser: + optional: true + + vitepress-plugin-group-icons@1.6.4: + resolution: {integrity: sha512-YCFH0G2zTX/me51wooWy4SvaaA6VKjIxLoWDU9ON4rFx9907Yf9ZpCpa4JpwloVuvm5+82fqLXSuZ98EJ92UUQ==} + peerDependencies: + markdown-it: '>=14' + vite: '>=3' + + vitepress-plugin-tabs@0.7.3: + resolution: {integrity: sha512-CkUz49UrTLcVOszuiHIA7ZBvfsg9RluRkFjRG1KvCg/NwuOTLZwcBRv7vBB3vMlDp0bWXIFOIwdI7bE93cV3Hw==} + peerDependencies: + vitepress: ^1.0.0 + vue: ^3.5.0 + + vitepress@1.6.4: + resolution: {integrity: sha512-+2ym1/+0VVrbhNyRoFFesVvBvHAVMZMK0rw60E3X/5349M1GuVdKeazuksqopEdvkKwKGs21Q729jX81/bkBJg==} + hasBin: true + peerDependencies: + markdown-it-mathjax3: ^4 + postcss: ^8 + peerDependenciesMeta: + markdown-it-mathjax3: + optional: true + postcss: + optional: true + + vue@3.5.22: + resolution: {integrity: sha512-toaZjQ3a/G/mYaLSbV+QsQhIdMo9x5rrqIpYRObsJ6T/J+RyCSFwN2LHNVH9v8uIcljDNa3QzPVdv3Y6b9hAJQ==} + peerDependencies: + typescript: '*' + peerDependenciesMeta: + typescript: + optional: true + + zwitch@2.0.4: + resolution: {integrity: sha512-bXE4cR/kVZhKZX/RjPEflHaKVhUVl85noU3v6b8apfQEc1x4A+zBxjZ4lN8LqGd6WZ3dl98pY4o717VFmoPp+A==} + +snapshots: + + '@algolia/abtesting@1.6.1': + dependencies: + '@algolia/client-common': 5.40.1 + '@algolia/requester-browser-xhr': 5.40.1 + '@algolia/requester-fetch': 5.40.1 + '@algolia/requester-node-http': 5.40.1 + + '@algolia/autocomplete-core@1.17.7(@algolia/client-search@5.40.1)(algoliasearch@5.40.1)(search-insights@2.17.3)': + dependencies: + '@algolia/autocomplete-plugin-algolia-insights': 1.17.7(@algolia/client-search@5.40.1)(algoliasearch@5.40.1)(search-insights@2.17.3) + '@algolia/autocomplete-shared': 1.17.7(@algolia/client-search@5.40.1)(algoliasearch@5.40.1) + transitivePeerDependencies: + - '@algolia/client-search' + - algoliasearch + - search-insights + + '@algolia/autocomplete-plugin-algolia-insights@1.17.7(@algolia/client-search@5.40.1)(algoliasearch@5.40.1)(search-insights@2.17.3)': + dependencies: + '@algolia/autocomplete-shared': 1.17.7(@algolia/client-search@5.40.1)(algoliasearch@5.40.1) + search-insights: 2.17.3 + transitivePeerDependencies: + - '@algolia/client-search' + - algoliasearch + + '@algolia/autocomplete-preset-algolia@1.17.7(@algolia/client-search@5.40.1)(algoliasearch@5.40.1)': + dependencies: + '@algolia/autocomplete-shared': 1.17.7(@algolia/client-search@5.40.1)(algoliasearch@5.40.1) + '@algolia/client-search': 5.40.1 + algoliasearch: 5.40.1 + + '@algolia/autocomplete-shared@1.17.7(@algolia/client-search@5.40.1)(algoliasearch@5.40.1)': + dependencies: + '@algolia/client-search': 5.40.1 + algoliasearch: 5.40.1 + + '@algolia/client-abtesting@5.40.1': + dependencies: + '@algolia/client-common': 5.40.1 + '@algolia/requester-browser-xhr': 5.40.1 + '@algolia/requester-fetch': 5.40.1 + '@algolia/requester-node-http': 5.40.1 + + '@algolia/client-analytics@5.40.1': + dependencies: + '@algolia/client-common': 5.40.1 + '@algolia/requester-browser-xhr': 5.40.1 + '@algolia/requester-fetch': 5.40.1 + '@algolia/requester-node-http': 5.40.1 + + '@algolia/client-common@5.40.1': {} + + '@algolia/client-insights@5.40.1': + dependencies: + 
'@algolia/client-common': 5.40.1 + '@algolia/requester-browser-xhr': 5.40.1 + '@algolia/requester-fetch': 5.40.1 + '@algolia/requester-node-http': 5.40.1 + + '@algolia/client-personalization@5.40.1': + dependencies: + '@algolia/client-common': 5.40.1 + '@algolia/requester-browser-xhr': 5.40.1 + '@algolia/requester-fetch': 5.40.1 + '@algolia/requester-node-http': 5.40.1 + + '@algolia/client-query-suggestions@5.40.1': + dependencies: + '@algolia/client-common': 5.40.1 + '@algolia/requester-browser-xhr': 5.40.1 + '@algolia/requester-fetch': 5.40.1 + '@algolia/requester-node-http': 5.40.1 + + '@algolia/client-search@5.40.1': + dependencies: + '@algolia/client-common': 5.40.1 + '@algolia/requester-browser-xhr': 5.40.1 + '@algolia/requester-fetch': 5.40.1 + '@algolia/requester-node-http': 5.40.1 + + '@algolia/ingestion@1.40.1': + dependencies: + '@algolia/client-common': 5.40.1 + '@algolia/requester-browser-xhr': 5.40.1 + '@algolia/requester-fetch': 5.40.1 + '@algolia/requester-node-http': 5.40.1 + + '@algolia/monitoring@1.40.1': + dependencies: + '@algolia/client-common': 5.40.1 + '@algolia/requester-browser-xhr': 5.40.1 + '@algolia/requester-fetch': 5.40.1 + '@algolia/requester-node-http': 5.40.1 + + '@algolia/recommend@5.40.1': + dependencies: + '@algolia/client-common': 5.40.1 + '@algolia/requester-browser-xhr': 5.40.1 + '@algolia/requester-fetch': 5.40.1 + '@algolia/requester-node-http': 5.40.1 + + '@algolia/requester-browser-xhr@5.40.1': + dependencies: + '@algolia/client-common': 5.40.1 + + '@algolia/requester-fetch@5.40.1': + dependencies: + '@algolia/client-common': 5.40.1 + + '@algolia/requester-node-http@5.40.1': + dependencies: + '@algolia/client-common': 5.40.1 + + '@antfu/install-pkg@1.1.0': + dependencies: + package-manager-detector: 1.5.0 + tinyexec: 1.0.1 + + '@antfu/utils@9.3.0': {} + + '@babel/helper-string-parser@7.27.1': {} + + '@babel/helper-validator-identifier@7.27.1': {} + + '@babel/parser@7.28.4': + dependencies: + '@babel/types': 7.28.4 + + '@babel/types@7.28.4': + dependencies: + '@babel/helper-string-parser': 7.27.1 + '@babel/helper-validator-identifier': 7.27.1 + + '@docsearch/css@3.8.2': {} + + '@docsearch/js@3.8.2(@algolia/client-search@5.40.1)(search-insights@2.17.3)': + dependencies: + '@docsearch/react': 3.8.2(@algolia/client-search@5.40.1)(search-insights@2.17.3) + preact: 10.27.2 + transitivePeerDependencies: + - '@algolia/client-search' + - '@types/react' + - react + - react-dom + - search-insights + + '@docsearch/react@3.8.2(@algolia/client-search@5.40.1)(search-insights@2.17.3)': + dependencies: + '@algolia/autocomplete-core': 1.17.7(@algolia/client-search@5.40.1)(algoliasearch@5.40.1)(search-insights@2.17.3) + '@algolia/autocomplete-preset-algolia': 1.17.7(@algolia/client-search@5.40.1)(algoliasearch@5.40.1) + '@docsearch/css': 3.8.2 + algoliasearch: 5.40.1 + optionalDependencies: + search-insights: 2.17.3 + transitivePeerDependencies: + - '@algolia/client-search' + + '@esbuild/aix-ppc64@0.21.5': + optional: true + + '@esbuild/android-arm64@0.21.5': + optional: true + + '@esbuild/android-arm@0.21.5': + optional: true + + '@esbuild/android-x64@0.21.5': + optional: true + + '@esbuild/darwin-arm64@0.21.5': + optional: true + + '@esbuild/darwin-x64@0.21.5': + optional: true + + '@esbuild/freebsd-arm64@0.21.5': + optional: true + + '@esbuild/freebsd-x64@0.21.5': + optional: true + + '@esbuild/linux-arm64@0.21.5': + optional: true + + '@esbuild/linux-arm@0.21.5': + optional: true + + '@esbuild/linux-ia32@0.21.5': + optional: true + + 
'@esbuild/linux-loong64@0.21.5': + optional: true + + '@esbuild/linux-mips64el@0.21.5': + optional: true + + '@esbuild/linux-ppc64@0.21.5': + optional: true + + '@esbuild/linux-riscv64@0.21.5': + optional: true + + '@esbuild/linux-s390x@0.21.5': + optional: true + + '@esbuild/linux-x64@0.21.5': + optional: true + + '@esbuild/netbsd-x64@0.21.5': + optional: true + + '@esbuild/openbsd-x64@0.21.5': + optional: true + + '@esbuild/sunos-x64@0.21.5': + optional: true + + '@esbuild/win32-arm64@0.21.5': + optional: true + + '@esbuild/win32-ia32@0.21.5': + optional: true + + '@esbuild/win32-x64@0.21.5': + optional: true + + '@iconify-json/logos@1.2.9': + dependencies: + '@iconify/types': 2.0.0 + + '@iconify-json/simple-icons@1.2.55': + dependencies: + '@iconify/types': 2.0.0 + + '@iconify-json/vscode-icons@1.2.32': + dependencies: + '@iconify/types': 2.0.0 + + '@iconify/types@2.0.0': {} + + '@iconify/utils@3.0.2': + dependencies: + '@antfu/install-pkg': 1.1.0 + '@antfu/utils': 9.3.0 + '@iconify/types': 2.0.0 + debug: 4.4.3 + globals: 15.15.0 + kolorist: 1.8.0 + local-pkg: 1.1.2 + mlly: 1.8.0 + transitivePeerDependencies: + - supports-color + + '@jridgewell/sourcemap-codec@1.5.5': {} + + '@rollup/rollup-android-arm-eabi@4.52.5': + optional: true + + '@rollup/rollup-android-arm64@4.52.5': + optional: true + + '@rollup/rollup-darwin-arm64@4.52.5': + optional: true + + '@rollup/rollup-darwin-x64@4.52.5': + optional: true + + '@rollup/rollup-freebsd-arm64@4.52.5': + optional: true + + '@rollup/rollup-freebsd-x64@4.52.5': + optional: true + + '@rollup/rollup-linux-arm-gnueabihf@4.52.5': + optional: true + + '@rollup/rollup-linux-arm-musleabihf@4.52.5': + optional: true + + '@rollup/rollup-linux-arm64-gnu@4.52.5': + optional: true + + '@rollup/rollup-linux-arm64-musl@4.52.5': + optional: true + + '@rollup/rollup-linux-loong64-gnu@4.52.5': + optional: true + + '@rollup/rollup-linux-ppc64-gnu@4.52.5': + optional: true + + '@rollup/rollup-linux-riscv64-gnu@4.52.5': + optional: true + + '@rollup/rollup-linux-riscv64-musl@4.52.5': + optional: true + + '@rollup/rollup-linux-s390x-gnu@4.52.5': + optional: true + + '@rollup/rollup-linux-x64-gnu@4.52.5': + optional: true + + '@rollup/rollup-linux-x64-musl@4.52.5': + optional: true + + '@rollup/rollup-openharmony-arm64@4.52.5': + optional: true + + '@rollup/rollup-win32-arm64-msvc@4.52.5': + optional: true + + '@rollup/rollup-win32-ia32-msvc@4.52.5': + optional: true + + '@rollup/rollup-win32-x64-gnu@4.52.5': + optional: true + + '@rollup/rollup-win32-x64-msvc@4.52.5': + optional: true + + '@shikijs/core@2.5.0': + dependencies: + '@shikijs/engine-javascript': 2.5.0 + '@shikijs/engine-oniguruma': 2.5.0 + '@shikijs/types': 2.5.0 + '@shikijs/vscode-textmate': 10.0.2 + '@types/hast': 3.0.4 + hast-util-to-html: 9.0.5 + + '@shikijs/engine-javascript@2.5.0': + dependencies: + '@shikijs/types': 2.5.0 + '@shikijs/vscode-textmate': 10.0.2 + oniguruma-to-es: 3.1.1 + + '@shikijs/engine-oniguruma@2.5.0': + dependencies: + '@shikijs/types': 2.5.0 + '@shikijs/vscode-textmate': 10.0.2 + + '@shikijs/langs@2.5.0': + dependencies: + '@shikijs/types': 2.5.0 + + '@shikijs/themes@2.5.0': + dependencies: + '@shikijs/types': 2.5.0 + + '@shikijs/transformers@2.5.0': + dependencies: + '@shikijs/core': 2.5.0 + '@shikijs/types': 2.5.0 + + '@shikijs/types@2.5.0': + dependencies: + '@shikijs/vscode-textmate': 10.0.2 + '@types/hast': 3.0.4 + + '@shikijs/vscode-textmate@10.0.2': {} + + '@types/estree@1.0.8': {} + + '@types/hast@3.0.4': + dependencies: + '@types/unist': 3.0.3 + + 
'@types/linkify-it@5.0.0': {} + + '@types/markdown-it@14.1.2': + dependencies: + '@types/linkify-it': 5.0.0 + '@types/mdurl': 2.0.0 + + '@types/mdast@4.0.4': + dependencies: + '@types/unist': 3.0.3 + + '@types/mdurl@2.0.0': {} + + '@types/unist@3.0.3': {} + + '@types/web-bluetooth@0.0.21': {} + + '@ungap/structured-clone@1.3.0': {} + + '@vitejs/plugin-vue@5.2.4(vite@5.4.21)(vue@3.5.22)': + dependencies: + vite: 5.4.21 + vue: 3.5.22 + + '@vue/compiler-core@3.5.22': + dependencies: + '@babel/parser': 7.28.4 + '@vue/shared': 3.5.22 + entities: 4.5.0 + estree-walker: 2.0.2 + source-map-js: 1.2.1 + + '@vue/compiler-dom@3.5.22': + dependencies: + '@vue/compiler-core': 3.5.22 + '@vue/shared': 3.5.22 + + '@vue/compiler-sfc@3.5.22': + dependencies: + '@babel/parser': 7.28.4 + '@vue/compiler-core': 3.5.22 + '@vue/compiler-dom': 3.5.22 + '@vue/compiler-ssr': 3.5.22 + '@vue/shared': 3.5.22 + estree-walker: 2.0.2 + magic-string: 0.30.19 + postcss: 8.5.6 + source-map-js: 1.2.1 + + '@vue/compiler-ssr@3.5.22': + dependencies: + '@vue/compiler-dom': 3.5.22 + '@vue/shared': 3.5.22 + + '@vue/devtools-api@7.7.7': + dependencies: + '@vue/devtools-kit': 7.7.7 + + '@vue/devtools-kit@7.7.7': + dependencies: + '@vue/devtools-shared': 7.7.7 + birpc: 2.6.1 + hookable: 5.5.3 + mitt: 3.0.1 + perfect-debounce: 1.0.0 + speakingurl: 14.0.1 + superjson: 2.2.2 + + '@vue/devtools-shared@7.7.7': + dependencies: + rfdc: 1.4.1 + + '@vue/reactivity@3.5.22': + dependencies: + '@vue/shared': 3.5.22 + + '@vue/runtime-core@3.5.22': + dependencies: + '@vue/reactivity': 3.5.22 + '@vue/shared': 3.5.22 + + '@vue/runtime-dom@3.5.22': + dependencies: + '@vue/reactivity': 3.5.22 + '@vue/runtime-core': 3.5.22 + '@vue/shared': 3.5.22 + csstype: 3.1.3 + + '@vue/server-renderer@3.5.22(vue@3.5.22)': + dependencies: + '@vue/compiler-ssr': 3.5.22 + '@vue/shared': 3.5.22 + vue: 3.5.22 + + '@vue/shared@3.5.22': {} + + '@vueuse/core@12.8.2': + dependencies: + '@types/web-bluetooth': 0.0.21 + '@vueuse/metadata': 12.8.2 + '@vueuse/shared': 12.8.2 + vue: 3.5.22 + transitivePeerDependencies: + - typescript + + '@vueuse/integrations@12.8.2(focus-trap@7.6.5)': + dependencies: + '@vueuse/core': 12.8.2 + '@vueuse/shared': 12.8.2 + vue: 3.5.22 + optionalDependencies: + focus-trap: 7.6.5 + transitivePeerDependencies: + - typescript + + '@vueuse/metadata@12.8.2': {} + + '@vueuse/shared@12.8.2': + dependencies: + vue: 3.5.22 + transitivePeerDependencies: + - typescript + + acorn@8.15.0: {} + + algoliasearch@5.40.1: + dependencies: + '@algolia/abtesting': 1.6.1 + '@algolia/client-abtesting': 5.40.1 + '@algolia/client-analytics': 5.40.1 + '@algolia/client-common': 5.40.1 + '@algolia/client-insights': 5.40.1 + '@algolia/client-personalization': 5.40.1 + '@algolia/client-query-suggestions': 5.40.1 + '@algolia/client-search': 5.40.1 + '@algolia/ingestion': 1.40.1 + '@algolia/monitoring': 1.40.1 + '@algolia/recommend': 5.40.1 + '@algolia/requester-browser-xhr': 5.40.1 + '@algolia/requester-fetch': 5.40.1 + '@algolia/requester-node-http': 5.40.1 + + argparse@2.0.1: {} + + birpc@2.6.1: {} + + ccount@2.0.1: {} + + character-entities-html4@2.1.0: {} + + character-entities-legacy@3.0.0: {} + + comma-separated-tokens@2.0.3: {} + + confbox@0.1.8: {} + + confbox@0.2.2: {} + + copy-anything@3.0.5: + dependencies: + is-what: 4.1.16 + + csstype@3.1.3: {} + + debug@4.4.3: + dependencies: + ms: 2.1.3 + + dequal@2.0.3: {} + + devlop@1.1.0: + dependencies: + dequal: 2.0.3 + + emoji-regex-xs@1.0.0: {} + + entities@4.5.0: {} + + esbuild@0.21.5: + optionalDependencies: + 
'@esbuild/aix-ppc64': 0.21.5 + '@esbuild/android-arm': 0.21.5 + '@esbuild/android-arm64': 0.21.5 + '@esbuild/android-x64': 0.21.5 + '@esbuild/darwin-arm64': 0.21.5 + '@esbuild/darwin-x64': 0.21.5 + '@esbuild/freebsd-arm64': 0.21.5 + '@esbuild/freebsd-x64': 0.21.5 + '@esbuild/linux-arm': 0.21.5 + '@esbuild/linux-arm64': 0.21.5 + '@esbuild/linux-ia32': 0.21.5 + '@esbuild/linux-loong64': 0.21.5 + '@esbuild/linux-mips64el': 0.21.5 + '@esbuild/linux-ppc64': 0.21.5 + '@esbuild/linux-riscv64': 0.21.5 + '@esbuild/linux-s390x': 0.21.5 + '@esbuild/linux-x64': 0.21.5 + '@esbuild/netbsd-x64': 0.21.5 + '@esbuild/openbsd-x64': 0.21.5 + '@esbuild/sunos-x64': 0.21.5 + '@esbuild/win32-arm64': 0.21.5 + '@esbuild/win32-ia32': 0.21.5 + '@esbuild/win32-x64': 0.21.5 + + estree-walker@2.0.2: {} + + exsolve@1.0.7: {} + + focus-trap@7.6.5: + dependencies: + tabbable: 6.2.0 + + fsevents@2.3.3: + optional: true + + globals@15.15.0: {} + + hast-util-to-html@9.0.5: + dependencies: + '@types/hast': 3.0.4 + '@types/unist': 3.0.3 + ccount: 2.0.1 + comma-separated-tokens: 2.0.3 + hast-util-whitespace: 3.0.0 + html-void-elements: 3.0.0 + mdast-util-to-hast: 13.2.0 + property-information: 7.1.0 + space-separated-tokens: 2.0.2 + stringify-entities: 4.0.4 + zwitch: 2.0.4 + + hast-util-whitespace@3.0.0: + dependencies: + '@types/hast': 3.0.4 + + hookable@5.5.3: {} + + html-void-elements@3.0.0: {} + + is-what@4.1.16: {} + + kolorist@1.8.0: {} + + linkify-it@5.0.0: + dependencies: + uc.micro: 2.1.0 + + local-pkg@1.1.2: + dependencies: + mlly: 1.8.0 + pkg-types: 2.3.0 + quansync: 0.2.11 + + magic-string@0.30.19: + dependencies: + '@jridgewell/sourcemap-codec': 1.5.5 + + mark.js@8.11.1: {} + + markdown-it@14.1.0: + dependencies: + argparse: 2.0.1 + entities: 4.5.0 + linkify-it: 5.0.0 + mdurl: 2.0.0 + punycode.js: 2.3.1 + uc.micro: 2.1.0 + + mdast-util-to-hast@13.2.0: + dependencies: + '@types/hast': 3.0.4 + '@types/mdast': 4.0.4 + '@ungap/structured-clone': 1.3.0 + devlop: 1.1.0 + micromark-util-sanitize-uri: 2.0.1 + trim-lines: 3.0.1 + unist-util-position: 5.0.0 + unist-util-visit: 5.0.0 + vfile: 6.0.3 + + mdurl@2.0.0: {} + + micromark-util-character@2.1.1: + dependencies: + micromark-util-symbol: 2.0.1 + micromark-util-types: 2.0.2 + + micromark-util-encode@2.0.1: {} + + micromark-util-sanitize-uri@2.0.1: + dependencies: + micromark-util-character: 2.1.1 + micromark-util-encode: 2.0.1 + micromark-util-symbol: 2.0.1 + + micromark-util-symbol@2.0.1: {} + + micromark-util-types@2.0.2: {} + + minisearch@7.2.0: {} + + mitt@3.0.1: {} + + mlly@1.8.0: + dependencies: + acorn: 8.15.0 + pathe: 2.0.3 + pkg-types: 1.3.1 + ufo: 1.6.1 + + ms@2.1.3: {} + + nanoid@3.3.11: {} + + oniguruma-to-es@3.1.1: + dependencies: + emoji-regex-xs: 1.0.0 + regex: 6.0.1 + regex-recursion: 6.0.2 + + package-manager-detector@1.5.0: {} + + pathe@2.0.3: {} + + perfect-debounce@1.0.0: {} + + picocolors@1.1.1: {} + + pkg-types@1.3.1: + dependencies: + confbox: 0.1.8 + mlly: 1.8.0 + pathe: 2.0.3 + + pkg-types@2.3.0: + dependencies: + confbox: 0.2.2 + exsolve: 1.0.7 + pathe: 2.0.3 + + postcss@8.5.6: + dependencies: + nanoid: 3.3.11 + picocolors: 1.1.1 + source-map-js: 1.2.1 + + preact@10.27.2: {} + + property-information@7.1.0: {} + + punycode.js@2.3.1: {} + + quansync@0.2.11: {} + + regex-recursion@6.0.2: + dependencies: + regex-utilities: 2.3.0 + + regex-utilities@2.3.0: {} + + regex@6.0.1: + dependencies: + regex-utilities: 2.3.0 + + rfdc@1.4.1: {} + + rollup@4.52.5: + dependencies: + '@types/estree': 1.0.8 + optionalDependencies: + 
'@rollup/rollup-android-arm-eabi': 4.52.5 + '@rollup/rollup-android-arm64': 4.52.5 + '@rollup/rollup-darwin-arm64': 4.52.5 + '@rollup/rollup-darwin-x64': 4.52.5 + '@rollup/rollup-freebsd-arm64': 4.52.5 + '@rollup/rollup-freebsd-x64': 4.52.5 + '@rollup/rollup-linux-arm-gnueabihf': 4.52.5 + '@rollup/rollup-linux-arm-musleabihf': 4.52.5 + '@rollup/rollup-linux-arm64-gnu': 4.52.5 + '@rollup/rollup-linux-arm64-musl': 4.52.5 + '@rollup/rollup-linux-loong64-gnu': 4.52.5 + '@rollup/rollup-linux-ppc64-gnu': 4.52.5 + '@rollup/rollup-linux-riscv64-gnu': 4.52.5 + '@rollup/rollup-linux-riscv64-musl': 4.52.5 + '@rollup/rollup-linux-s390x-gnu': 4.52.5 + '@rollup/rollup-linux-x64-gnu': 4.52.5 + '@rollup/rollup-linux-x64-musl': 4.52.5 + '@rollup/rollup-openharmony-arm64': 4.52.5 + '@rollup/rollup-win32-arm64-msvc': 4.52.5 + '@rollup/rollup-win32-ia32-msvc': 4.52.5 + '@rollup/rollup-win32-x64-gnu': 4.52.5 + '@rollup/rollup-win32-x64-msvc': 4.52.5 + fsevents: 2.3.3 + + search-insights@2.17.3: {} + + shiki@2.5.0: + dependencies: + '@shikijs/core': 2.5.0 + '@shikijs/engine-javascript': 2.5.0 + '@shikijs/engine-oniguruma': 2.5.0 + '@shikijs/langs': 2.5.0 + '@shikijs/themes': 2.5.0 + '@shikijs/types': 2.5.0 + '@shikijs/vscode-textmate': 10.0.2 + '@types/hast': 3.0.4 + + source-map-js@1.2.1: {} + + space-separated-tokens@2.0.2: {} + + speakingurl@14.0.1: {} + + stringify-entities@4.0.4: + dependencies: + character-entities-html4: 2.1.0 + character-entities-legacy: 3.0.0 + + superjson@2.2.2: + dependencies: + copy-anything: 3.0.5 + + tabbable@6.2.0: {} + + tinyexec@1.0.1: {} + + trim-lines@3.0.1: {} + + uc.micro@2.1.0: {} + + ufo@1.6.1: {} + + unist-util-is@6.0.1: + dependencies: + '@types/unist': 3.0.3 + + unist-util-position@5.0.0: + dependencies: + '@types/unist': 3.0.3 + + unist-util-stringify-position@4.0.0: + dependencies: + '@types/unist': 3.0.3 + + unist-util-visit-parents@6.0.2: + dependencies: + '@types/unist': 3.0.3 + unist-util-is: 6.0.1 + + unist-util-visit@5.0.0: + dependencies: + '@types/unist': 3.0.3 + unist-util-is: 6.0.1 + unist-util-visit-parents: 6.0.2 + + vfile-message@4.0.3: + dependencies: + '@types/unist': 3.0.3 + unist-util-stringify-position: 4.0.0 + + vfile@6.0.3: + dependencies: + '@types/unist': 3.0.3 + vfile-message: 4.0.3 + + vite@5.4.21: + dependencies: + esbuild: 0.21.5 + postcss: 8.5.6 + rollup: 4.52.5 + optionalDependencies: + fsevents: 2.3.3 + + vitepress-plugin-group-icons@1.6.4(markdown-it@14.1.0)(vite@5.4.21): + dependencies: + '@iconify-json/logos': 1.2.9 + '@iconify-json/vscode-icons': 1.2.32 + '@iconify/utils': 3.0.2 + markdown-it: 14.1.0 + vite: 5.4.21 + transitivePeerDependencies: + - supports-color + + vitepress-plugin-tabs@0.7.3(vitepress@1.6.4(@algolia/client-search@5.40.1)(postcss@8.5.6)(search-insights@2.17.3))(vue@3.5.22): + dependencies: + vitepress: 1.6.4(@algolia/client-search@5.40.1)(postcss@8.5.6)(search-insights@2.17.3) + vue: 3.5.22 + + vitepress@1.6.4(@algolia/client-search@5.40.1)(postcss@8.5.6)(search-insights@2.17.3): + dependencies: + '@docsearch/css': 3.8.2 + '@docsearch/js': 3.8.2(@algolia/client-search@5.40.1)(search-insights@2.17.3) + '@iconify-json/simple-icons': 1.2.55 + '@shikijs/core': 2.5.0 + '@shikijs/transformers': 2.5.0 + '@shikijs/types': 2.5.0 + '@types/markdown-it': 14.1.2 + '@vitejs/plugin-vue': 5.2.4(vite@5.4.21)(vue@3.5.22) + '@vue/devtools-api': 7.7.7 + '@vue/shared': 3.5.22 + '@vueuse/core': 12.8.2 + '@vueuse/integrations': 12.8.2(focus-trap@7.6.5) + focus-trap: 7.6.5 + mark.js: 8.11.1 + minisearch: 7.2.0 + shiki: 2.5.0 + vite: 
5.4.21 + vue: 3.5.22 + optionalDependencies: + postcss: 8.5.6 + transitivePeerDependencies: + - '@algolia/client-search' + - '@types/node' + - '@types/react' + - async-validator + - axios + - change-case + - drauu + - fuse.js + - idb-keyval + - jwt-decode + - less + - lightningcss + - nprogress + - qrcode + - react + - react-dom + - sass + - sass-embedded + - search-insights + - sortablejs + - stylus + - sugarss + - terser + - typescript + - universal-cookie + + vue@3.5.22: + dependencies: + '@vue/compiler-dom': 3.5.22 + '@vue/compiler-sfc': 3.5.22 + '@vue/runtime-dom': 3.5.22 + '@vue/server-renderer': 3.5.22(vue@3.5.22) + '@vue/shared': 3.5.22 + + zwitch@2.0.4: {} diff --git a/pnpm-workspace.yaml b/pnpm-workspace.yaml new file mode 100644 index 00000000..efc037aa --- /dev/null +++ b/pnpm-workspace.yaml @@ -0,0 +1,2 @@ +onlyBuiltDependencies: + - esbuild diff --git a/test/action_prompt/action_schemas_test.rb b/test/action_prompt/action_schemas_test.rb deleted file mode 100644 index e33fda5f..00000000 --- a/test/action_prompt/action_schemas_test.rb +++ /dev/null @@ -1,18 +0,0 @@ -require "test_helper" - -class DummyAgent < ApplicationAgent - def foo - prompt - end -end - -class ActionSchemasTest < ActiveSupport::TestCase - test "skip missing json templates" do - agent = DummyAgent.new - - assert_nothing_raised do - schemas = agent.action_schemas - assert_equal [], schemas - end - end -end diff --git a/test/action_prompt/message_json_parsing_test.rb b/test/action_prompt/message_json_parsing_test.rb deleted file mode 100644 index 59c4d22d..00000000 --- a/test/action_prompt/message_json_parsing_test.rb +++ /dev/null @@ -1,75 +0,0 @@ -# frozen_string_literal: true - -require "test_helper" -require "active_agent/action_prompt/message" - -class MessageJsonParsingTest < ActiveSupport::TestCase - test "automatically parses JSON content when content_type is application/json" do - json_string = '{"name": "John", "age": 30, "active": true}' - - message = ActiveAgent::ActionPrompt::Message.new( - content: json_string, - content_type: "application/json", - role: :assistant - ) - - assert message.content.is_a?(Hash) - assert_equal "John", message.content["name"] - assert_equal 30, message.content["age"] - assert_equal true, message.content["active"] - - # Raw content should still be available - assert_equal json_string, message.raw_content - end - - test "returns raw content if JSON parsing fails" do - invalid_json = "{invalid json}" - - message = ActiveAgent::ActionPrompt::Message.new( - content: invalid_json, - content_type: "application/json", - role: :assistant - ) - - assert message.content.is_a?(String) - assert_equal invalid_json, message.content - assert_equal invalid_json, message.raw_content - end - - test "does not parse content when content_type is not JSON" do - json_like_string = '{"looks": "like json"}' - - message = ActiveAgent::ActionPrompt::Message.new( - content: json_like_string, - content_type: "text/plain", - role: :assistant - ) - - assert message.content.is_a?(String) - assert_equal json_like_string, message.content - end - - test "handles empty content gracefully" do - message = ActiveAgent::ActionPrompt::Message.new( - content: "", - content_type: "application/json", - role: :assistant - ) - - assert_equal "", message.content - assert_equal "", message.raw_content - end - - test "preserves non-string content as-is" do - hash_content = { already: "parsed" } - - message = ActiveAgent::ActionPrompt::Message.new( - content: hash_content, - content_type: "application/json", - role: 
:assistant - ) - - assert_equal hash_content, message.content - assert_equal hash_content, message.raw_content - end -end diff --git a/test/action_prompt/message_test.rb b/test/action_prompt/message_test.rb deleted file mode 100644 index 5b2b2753..00000000 --- a/test/action_prompt/message_test.rb +++ /dev/null @@ -1,15 +0,0 @@ -require "test_helper" - -module ActiveAgent - module ActionPrompt - class MessageTest < ActiveSupport::TestCase - test "array for message hashes to messages" do - messages = [ - { content: "Instructions", role: :system }, - { content: "This is a message", role: :user } - ] - assert Message.from_messages(messages).first.is_a? Message - end - end - end -end diff --git a/test/action_prompt/multi_turn_tool_calling_test.rb b/test/action_prompt/multi_turn_tool_calling_test.rb deleted file mode 100644 index 0dc31040..00000000 --- a/test/action_prompt/multi_turn_tool_calling_test.rb +++ /dev/null @@ -1,300 +0,0 @@ -require "test_helper" -require "active_agent/action_prompt/base" -require "active_agent/action_prompt/prompt" -require "active_agent/action_prompt/message" -require "active_agent/action_prompt/action" - -module ActiveAgent - module ActionPrompt - class MultiTurnToolCallingTest < ActiveSupport::TestCase - class TestToolAgent < ActiveAgent::ActionPrompt::Base - attr_accessor :tool_results - - def initialize - super - @tool_results = {} - end - - def search_web - @tool_results[:search_web] = "Found 10 results for #{params[:query]}" - # Call prompt with a message body to generate the tool response - prompt(message: @tool_results[:search_web]) - end - - def get_weather - @tool_results[:get_weather] = "Weather in #{params[:location]}: Sunny, 72°F" - # Call prompt with a message body to generate the tool response - prompt(message: @tool_results[:get_weather]) - end - - def calculate - result = eval(params[:expression]) - @tool_results[:calculate] = "Result: #{result}" - # Call prompt with a message body to generate the tool response - prompt(message: @tool_results[:calculate]) - end - end - - setup do - @agent = TestToolAgent.new - @agent.context.messages << Message.new(role: :system, content: "You are a helpful assistant.") - @agent.context.messages << Message.new(role: :user, content: "What's the weather in NYC and search for restaurants there?") - end - - test "assistant message with tool_calls is preserved when performing actions" do - # Create a mock response with tool calls - assistant_message = Message.new( - role: :assistant, - content: "I'll help you with that. Let me check the weather and search for restaurants in NYC.", - action_requested: true, - raw_actions: [ - { - "id" => "call_001", - "type" => "function", - "function" => { - "name" => "get_weather", - "arguments" => '{"location": "NYC"}' - } - } - ], - requested_actions: [ - Action.new( - id: "call_001", - name: "get_weather", - params: { location: "NYC" } - ) - ] - ) - - # Add assistant message to context (simulating what update_context does) - @agent.context.messages << assistant_message - - # Perform the action - @agent.send(:perform_action, assistant_message.requested_actions.first) - - # Verify the assistant message is still there - assistant_messages = @agent.context.messages.select { |m| m.role == :assistant } - assert_equal 1, assistant_messages.count - assert_equal assistant_message, assistant_messages.first - assert assistant_messages.first.raw_actions.present? 
- - # Verify the tool response was added - tool_messages = @agent.context.messages.select { |m| m.role == :tool } - assert_equal 1, tool_messages.count - assert_equal "call_001", tool_messages.first.action_id - assert_equal "get_weather", tool_messages.first.action_name - end - - test "tool response messages have correct action_id matching tool_call id" do - action = Action.new( - id: "call_abc123", - name: "search_web", - params: { query: "NYC restaurants" } - ) - - # Add an assistant message with tool_calls - @agent.context.messages << Message.new( - role: :assistant, - content: "Searching for restaurants", - raw_actions: [ { - "id" => "call_abc123", - "type" => "function", - "function" => { - "name" => "search_web", - "arguments" => '{"query": "NYC restaurants"}' - } - } ] - ) - - @agent.send(:perform_action, action) - - tool_message = @agent.context.messages.last - assert_equal :tool, tool_message.role - assert_equal "call_abc123", tool_message.action_id - assert_equal action.id, tool_message.action_id - end - - test "multiple tool calls result in correct message sequence" do - # First tool call - first_assistant = Message.new( - role: :assistant, - content: "Getting weather first", - action_requested: true, - raw_actions: [ { - "id" => "call_001", - "type" => "function", - "function" => { "name" => "get_weather", "arguments" => '{"location": "NYC"}' } - } ], - requested_actions: [ - Action.new(id: "call_001", name: "get_weather", params: { location: "NYC" }) - ] - ) - - @agent.context.messages << first_assistant - @agent.send(:perform_action, first_assistant.requested_actions.first) - - # Second tool call - second_assistant = Message.new( - role: :assistant, - content: "Now searching for restaurants", - action_requested: true, - raw_actions: [ { - "id" => "call_002", - "type" => "function", - "function" => { "name" => "search_web", "arguments" => '{"query": "NYC restaurants"}' } - } ], - requested_actions: [ - Action.new(id: "call_002", name: "search_web", params: { query: "NYC restaurants" }) - ] - ) - - @agent.context.messages << second_assistant - @agent.send(:perform_action, second_assistant.requested_actions.first) - - # Verify message sequence - messages = @agent.context.messages - - # Filter to get the main messages (system, user, assistants, tools) - system_messages = messages.select { |m| m.role == :system } - user_messages = messages.select { |m| m.role == :user } - assistant_messages = messages.select { |m| m.role == :assistant } - tool_messages = messages.select { |m| m.role == :tool } - - # Agent starts with empty system message, plus the one we added in setup - assert_equal 2, system_messages.count - assert_equal 1, user_messages.count - assert_equal 2, assistant_messages.count - assert_equal 2, tool_messages.count - - # Verify tool response IDs match - assert_equal "call_001", tool_messages[0].action_id - assert_equal "call_002", tool_messages[1].action_id - end - - test "perform_actions handles multiple actions from single response" do - actions = [ - Action.new(id: "call_001", name: "get_weather", params: { location: "NYC" }), - Action.new(id: "call_002", name: "search_web", params: { query: "NYC restaurants" }) - ] - - assistant_message = Message.new( - role: :assistant, - content: "Getting both pieces of information", - raw_actions: [ - { "id" => "call_001", "type" => "function", "function" => { "name" => "get_weather" } }, - { "id" => "call_002", "type" => "function", "function" => { "name" => "search_web" } } - ] - ) - - @agent.context.messages << 
assistant_message - @agent.send(:perform_actions, requested_actions: actions) - - tool_messages = @agent.context.messages.select { |m| m.role == :tool } - assert_equal 2, tool_messages.count - assert_equal [ "call_001", "call_002" ], tool_messages.map(&:action_id) - assert_equal [ "get_weather", "search_web" ], tool_messages.map(&:action_name) - end - - test "handle_response preserves message flow for tool calls" do - # Create a mock response with tool calls - mock_response = Struct.new(:message, :prompt).new - mock_response.message = Message.new( - role: :assistant, - content: "I'll calculate that for you", - action_requested: true, - requested_actions: [ - Action.new(id: "calc_001", name: "calculate", params: { expression: "2 + 2" }) - ], - raw_actions: [ { - "id" => "calc_001", - "type" => "function", - "function" => { "name" => "calculate", "arguments" => '{"expression": "2 + 2"}' } - } ] - ) - - # Mock the generation provider - mock_provider = Minitest::Mock.new - mock_provider.expect(:generate, nil, [ @agent.context ]) - mock_provider.expect(:response, mock_response) - - @agent.instance_variable_set(:@generation_provider, mock_provider) - - # Simulate update_context adding the assistant message - @agent.context.messages << mock_response.message - - # Count messages before handle_response - initial_message_count = @agent.context.messages.count - - # Call handle_response (without continue_generation to avoid needing full provider setup) - @agent.stub(:continue_generation, mock_response) do - result = @agent.send(:handle_response, mock_response) - - # Should have added tool message(s) for the action - # Note: with the fix, the action's prompt call now properly renders and adds messages - assert @agent.context.messages.count > initial_message_count - - # Last message should be the tool response - last_message = @agent.context.messages.last - assert_equal :tool, last_message.role - assert_equal "calc_001", last_message.action_id - end - end - - test "tool message does not overwrite assistant message" do - assistant_message = Message.new( - role: :assistant, - content: "Original assistant message", - action_requested: true, - requested_actions: [ - Action.new(id: "test_001", name: "search_web", params: { query: "test" }) - ] - ) - - # Store reference to original assistant message - @agent.context.messages << assistant_message - original_assistant = @agent.context.messages.last - - # Perform action - @agent.send(:perform_action, assistant_message.requested_actions.first) - - # Find the assistant message again - assistant_in_context = @agent.context.messages.find { |m| m.role == :assistant } - - # Verify it's still the same message with same content - assert_equal original_assistant.object_id, assistant_in_context.object_id - assert_equal "Original assistant message", assistant_in_context.content - assert_equal :assistant, assistant_in_context.role - end - - test "context cloning in perform_action preserves messages" do - # Add initial messages - initial_messages = @agent.context.messages.dup - - action = Action.new( - id: "test_clone", - name: "search_web", - params: { query: "cloning test" } - ) - - @agent.send(:perform_action, action) - - # After perform_action, we expect: - # - Original system message preserved - # - Original user message preserved - # - New tool message added - - system_messages = @agent.context.messages.select { |m| m.role == :system } - user_messages = @agent.context.messages.select { |m| m.role == :user } - tool_messages = @agent.context.messages.select { |m| m.role 
== :tool } - - # The system messages may be modified during prompt flow - # What matters is we have system messages and the user message is preserved - assert system_messages.any?, "Should have system messages" - assert_equal 1, user_messages.count, "Should have one user message" - assert_equal "What's the weather in NYC and search for restaurants there?", user_messages.first.content - assert_equal 1, tool_messages.count, "Should have one tool message" - assert_equal "Found 10 results for cloning test", tool_messages.first.content - end - end - end -end diff --git a/test/action_prompt/prompt_test.rb b/test/action_prompt/prompt_test.rb deleted file mode 100644 index 96facfc0..00000000 --- a/test/action_prompt/prompt_test.rb +++ /dev/null @@ -1,256 +0,0 @@ -require "test_helper" - -module ActiveAgent - module ActionPrompt - class PromptTest < ActiveSupport::TestCase - test "initializes with default attributes" do - prompt = Prompt.new - - assert_equal({}, prompt.options) - assert_equal ApplicationAgent, prompt.agent_class - assert_equal [], prompt.actions - assert_equal "", prompt.action_choice - assert_equal "", prompt.instructions - assert_equal "", prompt.body - assert_equal "text/plain", prompt.content_type - assert_nil prompt.message - # Should have one system message with empty instructions - assert_equal 1, prompt.messages.size - assert_equal :system, prompt.messages[0].role - assert_equal "", prompt.messages[0].content - assert_equal({}, prompt.params) - assert_equal "1.0", prompt.mime_version - assert_equal "UTF-8", prompt.charset - assert_equal [], prompt.context - assert_nil prompt.context_id - assert_equal({}, prompt.instance_variable_get(:@headers)) - assert_equal [], prompt.parts - end - - test "initializes with custom attributes" do - attributes = { - options: { key: "value" }, - agent_class: ApplicationAgent, - actions: [ "action1" ], - action_choice: "action1", - instructions: "Test instructions", - body: "Test body", - content_type: "application/json", - message: "Test message", - messages: [ Message.new(content: "Existing message") ], - params: { param1: "value1" }, - mime_version: "2.0", - charset: "ISO-8859-1", - context: [ "context1" ], - context_id: "123", - headers: { "Header-Key" => "Header-Value" }, - parts: [ "part1" ] - } - - prompt = Prompt.new(attributes) - - assert_equal attributes[:options], prompt.options - assert_equal attributes[:agent_class], prompt.agent_class - assert_equal attributes[:actions], prompt.actions - assert_equal attributes[:action_choice], prompt.action_choice - assert_equal attributes[:instructions], prompt.instructions - assert_equal attributes[:body], prompt.body - assert_equal attributes[:content_type], prompt.content_type - assert_equal attributes[:message], prompt.message.content - assert_equal ([ Message.new(content: "Test instructions", role: :system) ] + attributes[:messages] + [ Message.new(content: attributes[:message], role: :user) ]).map(&:to_h), prompt.messages.map(&:to_h) - assert_equal attributes[:params], prompt.params - assert_equal attributes[:mime_version], prompt.mime_version - assert_equal attributes[:charset], prompt.charset - assert_equal attributes[:context], prompt.context - assert_equal attributes[:context_id], prompt.context_id - assert_equal attributes[:headers], prompt.instance_variable_get(:@headers) - assert_equal attributes[:parts], prompt.parts - end - - test "to_s returns message content as string" do - prompt = Prompt.new(message: "Test message") - assert_equal "Test message", prompt.to_s - end - - test 
"multimodal? returns true if message content is an array" do - prompt = Prompt.new(message: Message.new(content: [ "image1.png", "image2.png" ])) - assert prompt.multimodal? - end - - test "multimodal? returns true if any message content is an array" do - prompt = Prompt.new(messages: [ Message.new(content: "text"), Message.new(content: [ "image1.png", "image2.png" ]) ]) - assert prompt.multimodal? - end - - test "multimodal? handles nil messages gracefully" do - # Test with empty messages array - prompt = Prompt.new(messages: []) - assert_not prompt.multimodal? - - # Test with nil message content but array in messages - prompt_with_nil = Prompt.new(message: nil, messages: [ Message.new(content: [ "image.png" ]) ]) - assert prompt_with_nil.multimodal? - - # Test with only nil message and empty messages - prompt_all_nil = Prompt.new(message: nil, messages: []) - assert_not prompt_all_nil.multimodal? - end - - test "from_messages initializes messages from an array of Message objects" do - prompt = Prompt.new( - messages: [ - { content: "Hello, how can I assist you today?", role: :assistant }, - { content: "I need help with my account.", role: :user } - ] - ) - - # Should have system message plus the two provided messages - assert_equal 3, prompt.messages.size - assert_equal :system, prompt.messages[0].role - assert_equal "", prompt.messages[0].content - assert_equal "Hello, how can I assist you today?", prompt.messages[1].content - assert_equal :assistant, prompt.messages[1].role - assert_equal "I need help with my account.", prompt.messages[2].content - assert_equal :user, prompt.messages[2].role - end - - test "from_messages initializes messages from an array of Message objects with instructions" do - prompt = Prompt.new( - messages: [ - { content: "Hello, how can I assist you today?", role: :assistant }, - { content: "I need help with my account.", role: :user } - ], - instructions: "System instructions" - ) - - assert_equal 3, prompt.messages.size - assert_equal "System instructions", prompt.messages.first.content - assert_equal :system, prompt.messages.first.role - assert_equal "Hello, how can I assist you today?", prompt.messages.second.content - assert_equal :assistant, prompt.messages.second.role - assert_equal "I need help with my account.", prompt.messages.last.content - assert_equal :user, prompt.messages.last.role - end - - test "to_h returns hash representation of prompt" do - instructions = Message.new(content: "Test instructions", role: :system) - message = Message.new(content: "Test message") - prompt = Prompt.new( - actions: [ "action1" ], - action_choice: "action1", - instructions: instructions.content, - message: message, - messages: [], - headers: { "Header-Key" => "Header-Value" }, - context: [ "context1" ] - ) - expected_hash = { - actions: [ "action1" ], - action: "action1", - instructions: instructions.content, - message: message.to_h, - messages: [ instructions.to_h, message.to_h ], - headers: { "Header-Key" => "Header-Value" }, - context: [ "context1" ] - } - - assert_equal expected_hash, prompt.to_h - end - - test "add_part adds a message to parts and updates message" do - message = Message.new(content: "Part message", content_type: "text/plain") - prompt = Prompt.new(content_type: "text/plain") - - prompt.add_part(message) - - assert_equal message, prompt.message - assert_includes prompt.parts, message - end - - test "multipart? returns true if parts are present" do - prompt = Prompt.new - assert_not prompt.multipart? 
- - prompt.add_part(Message.new(content: "Part message")) - assert prompt.multipart? - end - - test "headers method merges new headers" do - prompt = Prompt.new(headers: { "Existing-Key" => "Existing-Value" }) - prompt.headers("New-Key" => "New-Value") - - expected_headers = { "Existing-Key" => "Existing-Value", "New-Key" => "New-Value" } - assert_equal expected_headers, prompt.instance_variable_get(:@headers) - end - - test "set_messages adds system message if instructions are present" do - prompt = Prompt.new(instructions: "System instructions") - assert_equal 1, prompt.messages.size - assert_equal "System instructions", prompt.messages.first.content - assert_equal :system, prompt.messages.first.role - end - - test "set_message creates a user message from string" do - prompt = Prompt.new(message: "User message") - assert_equal "User message", prompt.message.content - assert_equal :user, prompt.message.role - end - - test "set_message creates a user message from body if message content is blank" do - prompt = Prompt.new(body: "Body content", message: Message.new(content: "")) - assert_equal "Body content", prompt.message.content - assert_equal :user, prompt.message.role - end - - test "instructions setter adds instruction to messages" do - prompt = Prompt.new - prompt.instructions = "System instructions" - assert_equal 1, prompt.messages.size - assert_equal "System instructions", prompt.messages.first.content - assert_equal :system, prompt.messages.first.role - end - - test "instructions setter replace instruction if it already exists in messages" do - prompt = Prompt.new(instructions: "System instructions") - prompt.instructions = "New system instructions" - assert_equal 1, prompt.messages.size - assert_equal "New system instructions", prompt.messages.first.content - assert_equal :system, prompt.messages.first.role - end - - test "instructions setter updates system message even with empty instructions" do - prompt = Prompt.new - # Prompt already has a system message with empty content - assert_equal 1, prompt.messages.size - assert_equal "", prompt.messages[0].content - - # Setting empty instructions should maintain the system message - prompt.instructions = "" - assert_equal 1, prompt.messages.size - assert_equal "", prompt.messages[0].content - end - - test "initializes with actions, message, and messages example" do - # region support_agent_prompt_initialization - prompt = ActiveAgent::ActionPrompt::Prompt.new( - actions: SupportAgent.new.action_schemas, - message: "I need help with my account.", - messages: [ - { content: "Hello, how can I assist you today?", role: :assistant } - ] - ) - # endregion support_agent_prompt_initialization - - assert_equal "get_cat_image", prompt.actions.first["function"]["name"] - assert_equal "I need help with my account.", prompt.message.content - assert_equal :user, prompt.message.role - # Should have system message plus the provided assistant message - assert_equal 3, prompt.messages.size - assert_equal :system, prompt.messages[0].role - assert_equal "", prompt.messages[0].content - assert_equal "Hello, how can I assist you today?", prompt.messages[1].content - assert_equal :assistant, prompt.messages[1].role - end - end - end -end diff --git a/test/action_prompt/response_delegation_test.rb b/test/action_prompt/response_delegation_test.rb deleted file mode 100644 index abdbfeaa..00000000 --- a/test/action_prompt/response_delegation_test.rb +++ /dev/null @@ -1,59 +0,0 @@ -# frozen_string_literal: true - -require "test_helper" - -class 
ResponseDelegationTest < ActiveSupport::TestCase - class TestAgent < ActiveAgent::Base - def test_action - prompt(message: "Test message") - end - - after_generation :check_response_access - - private - - def check_response_access - # This should work now with delegation - assert response.present? - assert_equal response, generation_provider.response - end - end - - test "agent delegates response to generation_provider" do - agent = TestAgent.new - - # Create a simple test provider that tracks response - test_provider = Class.new do - attr_accessor :response - - def generate(prompt) - @response = ActiveAgent::GenerationProvider::Response.new( - prompt: prompt, - message: ActiveAgent::ActionPrompt::Message.new(content: "Test response", role: :assistant) - ) - end - end.new - - # Replace the generation_provider - agent.stub :generation_provider, test_provider do - # No response before generation - assert_nil agent.response - - # Simulate generation - agent.instance_variable_set(:@context, ActiveAgent::ActionPrompt::Prompt.new) - agent.send(:perform_generation) - - # Now response should be delegated from generation_provider - assert agent.response.present? - assert_equal "Test response", agent.response.message.content - assert_equal test_provider.response, agent.response - end - end - - test "response delegation handles nil generation_provider gracefully" do - agent = TestAgent.new - agent.stub :generation_provider, nil do - assert_nil agent.response - end - end -end diff --git a/test/agents/actions_examples_test.rb b/test/agents/actions_examples_test.rb deleted file mode 100644 index a895eb20..00000000 --- a/test/agents/actions_examples_test.rb +++ /dev/null @@ -1,66 +0,0 @@ -require "test_helper" - -class ActionsExamplesTest < ActiveSupport::TestCase - test "using actions to prompt the agent with a templated message" do - # region actions_prompt_agent_basic - parameterized_agent = TravelAgent.with(message: "I want to find hotels in Paris") - travel_prompt = parameterized_agent.search - - # The search action renders a view with the search results - assert travel_prompt.message.content.include?("Travel Search Results") - # endregion actions_prompt_agent_basic - end - - test "agent uses actions with parameters" do - # region actions_with_parameters - # Pass parameters using the with method - agent = TravelAgent.with( - message: "Book this flight", - flight_id: "AA456", - passenger_name: "Alice Johnson" - ) - - # Access parameters in the action using params - booking_prompt = agent.book - assert booking_prompt.message.content.include?("AA456") - assert booking_prompt.message.content.include?("Alice Johnson") - # endregion actions_with_parameters - end - - test "actions with different content types" do - # region actions_content_types - # HTML content for rich UI - search_result = TravelAgent.with( - departure: "NYC", - destination: "London", - results: [ { airline: "British Airways", price: 599, departure: "9:00 AM" } ] - ).search - assert search_result.message.content.include?("Travel Search Results") - assert search_result.message.content.include?("British Airways") - - # Text content for simple responses - confirm_result = TravelAgent.with( - confirmation_number: "ABC123", - passenger_name: "Test User" - ).confirm - assert confirm_result.message.content.include?("Your booking has been confirmed!") - assert confirm_result.message.content.include?("ABC123") - # endregion actions_content_types - end - - test "using prompt_context for agent-driven generation" do - # region 
actions_prompt_context_generation - # Use prompt_context when you want the agent to determine actions - agent = TravelAgent.with(message: "I need to book a flight to Paris") - prompt_context = agent.prompt_context - - # The agent will have access to all available actions - assert prompt_context.actions.is_a?(Array) - assert prompt_context.actions.size > 0 - # Actions are available as function schemas - - # Generate a response (in real usage) - # response = prompt_context.generate_now - # endregion actions_prompt_context_generation - end -end diff --git a/test/agents/application_agent_test.rb b/test/agents/application_agent_test.rb deleted file mode 100644 index cc3f6771..00000000 --- a/test/agents/application_agent_test.rb +++ /dev/null @@ -1,69 +0,0 @@ -# test/application_agent_test.rb - additional test for embed functionality - -require "test_helper" - -class ApplicationAgentTest < ActiveSupport::TestCase - test "it renders a prompt with an 'Test' message" do - assert_equal "Test", ApplicationAgent.with(message: "Test").prompt_context.message.content - end - - test "it renders a prompt with an plain text message" do - assert_equal "Test Application Agent", ApplicationAgent.with(message: "Test Application Agent").prompt_context.message.content - end - - test "it renders a prompt with an plain text message and generates a response" do - VCR.use_cassette("application_agent_prompt_context_message_generation") do - test_response_message_content = "It seems like you're referring to a \"Test Application Agent.\" Could you please provide more details about what you need? Are you looking for information on how to create one, its functions, or specific technologies related to application testing? Let me know how I can assist you!" - # region application_agent_prompt_context_message_generation - message = "Test Application Agent" - prompt = ApplicationAgent.with(message: message).prompt_context - response = prompt.generate_now - # endregion application_agent_prompt_context_message_generation - - doc_example_output(response) - assert_equal test_response_message_content, response.message.content - end - end - - test "it renders a prompt with an plain text message with previous messages and generates a response" do - VCR.use_cassette("application_agent_loaded_context_message_generation") do - test_response_message_content = "Sure, I can help with that! Could you please provide me with more details about the issue you're experiencing with your account?" 
- # region application_agent_loaded_context_message_generation - message = "I need help with my account" - previous_context = ActiveAgent::ActionPrompt::Prompt.new( - messages: [ { content: "Hello, how can I assist you today?", role: :assistant } ], - instructions: "You're an application agent" - ) - response = ApplicationAgent.with(message: message, messages: previous_context.messages).prompt_context.generate_now - # endregion application_agent_loaded_context_message_generation - - doc_example_output(response) - assert_equal test_response_message_content, response.message.content - end - end - - test "embed generates vector for message content" do - VCR.use_cassette("application_agent_message_embedding") do - message = ActiveAgent::ActionPrompt::Message.new(content: "Test content for embedding") - response = message.embed - - assert_not_nil response - assert_equal message, response - # Assuming your provider returns a vector when embed is called - assert_not_nil response.content - end - end - - test "embed can be called directly on an agent instance" do - VCR.use_cassette("application_agent_embeddings") do - agent = ApplicationAgent.new - agent.context = ActiveAgent::ActionPrompt::Prompt.new( - message: ActiveAgent::ActionPrompt::Message.new(content: "Test direct embedding") - ) - response = agent.embed - - assert_not_nil response - assert_instance_of ActiveAgent::GenerationProvider::Response, response - end - end -end diff --git a/test/agents/browser_agent_test.rb b/test/agents/browser_agent_test.rb deleted file mode 100644 index 575b22de..00000000 --- a/test/agents/browser_agent_test.rb +++ /dev/null @@ -1,148 +0,0 @@ -require "test_helper" - -class BrowserAgentTest < ActiveSupport::TestCase - test "browser agent navigates to a URL using prompt_context" do - # Skip if Chrome/Cuprite not available - skip "Cuprite/Chrome not configured for CI" if ENV["CI"] - - VCR.use_cassette("browser_agent_navigate_with_ai") do - # region navigate_example - response = BrowserAgent.with( - message: "Navigate to https://www.example.com and tell me what you see" - ).prompt_context.generate_now - - assert response.message.content.present? - # endregion navigate_example - - doc_example_output(response) - end - end - - test "browser agent uses actions as tools with AI" do - skip "Cuprite/Chrome not configured for CI" if ENV["CI"] - - VCR.use_cassette("browser_agent_with_ai") do - # region ai_browser_example - response = BrowserAgent.with( - message: "Go to https://www.example.com and extract the main heading" - ).prompt_context.generate_now - - # Check that AI used the tools - assert response.prompt.messages.any? { |m| m.role == :tool } - assert response.message.content.present? 
- # endregion ai_browser_example - - doc_example_output(response) - end - end - - test "browser agent can be used directly without AI" do - skip "Cuprite/Chrome not configured for CI" if ENV["CI"] - - VCR.use_cassette("browser_agent_direct_navigation") do - # region direct_action_example - # Call navigate action directly (synchronous execution) - navigate_response = BrowserAgent.with( - url: "https://www.example.com" - ).navigate - - # The action returns a Generation object - assert_kind_of ActiveAgent::Generation, navigate_response - - # Execute the generation - result = navigate_response.generate_now - - assert result.message.content.include?("navigated") || result.message.content.include?("Failed") || result.message.content.include?("Example") - # endregion direct_action_example - - doc_example_output(result) - end - end - - test "browser agent researches a topic on Wikipedia" do - skip "Cuprite/Chrome not configured for CI" if ENV["CI"] - - VCR.use_cassette("browser_agent_wikipedia_research") do - # region wikipedia_research_example - response = BrowserAgent.with( - message: "Research the Apollo 11 moon landing mission. Start at the main Wikipedia article, then: - 1) Extract the main content to get an overview - 2) Find and follow links to learn about the crew members (Neil Armstrong, Buzz Aldrin, Michael Collins) - 3) Take screenshots of important pages - 4) Extract key dates, mission objectives, and historical significance - 5) Look for related missions or events by exploring relevant links - Please provide a comprehensive summary with details about the mission, crew, and its impact on space exploration.", - url: "https://en.wikipedia.org/wiki/Apollo_11" - ).prompt_context.generate_now - - # The agent should navigate to Wikipedia and gather information - assert response.message.content.present? - assert response.message.content.downcase.include?("apollo") || - response.message.content.downcase.include?("moon") || - response.message.content.downcase.include?("armstrong") || - response.message.content.downcase.include?("nasa") - - # Check that multiple tools were used - tool_messages = response.prompt.messages.select { |m| m.role == :tool } - assert tool_messages.any?, "Should have used tools" - - # Check for variety in tool usage (the agent should use multiple different tools) - assistant_messages = response.prompt.messages.select { |m| m.role == :assistant } - tool_names = [] - assistant_messages.each do |msg| - if msg.requested_actions&.any? - tool_names.concat(msg.requested_actions.map(&:name)) - end - end - tool_names.uniq! - - assert tool_names.length > 2, "Should use at least 3 different tools for comprehensive research" - # endregion wikipedia_research_example - - doc_example_output(response) - end - end - - test "browser agent takes area screenshot" do - skip "Cuprite/Chrome not configured for CI" if ENV["CI"] - - VCR.use_cassette("browser_agent_area_screenshot") do - # region area_screenshot_example - response = BrowserAgent.with( - message: "Navigate to https://www.example.com and take a screenshot of just the header area (top 200 pixels)" - ).prompt_context.generate_now - - assert response.message.content.present? - - # Check that screenshot tool was used - tool_messages = response.prompt.messages.select { |m| m.role == :tool } - assert tool_messages.any? 
{ |m| m.content.include?("screenshot") }, "Should have taken a screenshot" - # endregion area_screenshot_example - - doc_example_output(response) - end - end - - test "browser agent auto-crops main content" do - skip "Cuprite/Chrome not configured for CI" if ENV["CI"] - - VCR.use_cassette("browser_agent_main_content_crop") do - # region main_content_crop_example - response = BrowserAgent.with( - message: "Navigate to Wikipedia's Apollo 11 page and take a screenshot of the main content (should automatically exclude navigation/header)" - ).prompt_context.generate_now - - assert response.message.content.present? - - # Check that screenshot was taken - tool_messages = response.prompt.messages.select { |m| m.role == :tool } - assert tool_messages.any? { |m| m.content.include?("screenshot") }, "Should have taken a screenshot" - - # Check that the agent navigated to Wikipedia - assert tool_messages.any? { |m| m.content.include?("wikipedia") }, "Should have navigated to Wikipedia" - # endregion main_content_crop_example - - doc_example_output(response) - end - end -end diff --git a/test/agents/builtin_tools_doc_test.rb b/test/agents/builtin_tools_doc_test.rb deleted file mode 100644 index a068970e..00000000 --- a/test/agents/builtin_tools_doc_test.rb +++ /dev/null @@ -1,115 +0,0 @@ -require "test_helper" -require_relative "../dummy/app/agents/web_search_agent" -require_relative "../dummy/app/agents/multimodal_agent" - -class BuiltinToolsDocTest < ActiveSupport::TestCase - # region web_search_example - test "web search with responses API example" do - skip "Requires API credentials" unless has_openai_credentials? - - VCR.use_cassette("doc_web_search_responses") do - generation = WebSearchAgent.with( - query: "Latest Ruby on Rails 8 features", - context_size: "high" - ).search_with_tools - - result = generation.generate_now - - # The response includes web search results - assert result.message.content.present? - assert result.message.content.include?("Rails") - - doc_example_output(result) - end - end - # endregion web_search_example - - # region image_generation_example - test "image generation with responses API example" do - skip "Requires API credentials" unless has_openai_credentials? - - VCR.use_cassette("doc_image_generation") do - generation = MultimodalAgent.with( - description: "A serene landscape with mountains and a lake at sunset", - size: "1024x1024", - quality: "high" - ).create_image - - result = generation.generate_now - - # The response includes the generated image - assert result.message.content.present? - - doc_example_output(result) - end - end - # endregion image_generation_example - - # region combined_tools_example - test "combining multiple built-in tools example" do - skip "Requires API credentials" unless has_openai_credentials? - - VCR.use_cassette("doc_combined_tools") do - generation = MultimodalAgent.with( - topic: "Climate Change Impact", - style: "modern" - ).create_infographic - - result = generation.generate_now - - # The response uses both web search and image generation - assert result.message.content.present? 
- - doc_example_output(result) - end - end - # endregion combined_tools_example - - # region tool_configuration_example - test "tool configuration in prompt options" do - # Example showing how to configure built-in tools - tools_config = [ - { - type: "web_search_preview", - search_context_size: "high", - user_location: { - country: "US", - city: "San Francisco" - } - }, - { - type: "image_generation", - size: "1024x1024", - quality: "high", - format: "png" - }, - { - type: "mcp", - server_label: "GitHub", - server_url: "https://api.githubcopilot.com/mcp/", - require_approval: "never" - } - ] - - # Show how the options would be passed to prompt - example_options = { - use_responses_api: true, - model: "gpt-5", - tools: tools_config - } - - # Verify the configuration structure - assert example_options[:tools].is_a?(Array) - assert_equal 3, example_options[:tools].length - assert_equal "web_search_preview", example_options[:tools][0][:type] - assert_equal "image_generation", example_options[:tools][1][:type] - assert_equal "mcp", example_options[:tools][2][:type] - - doc_example_output({ - description: "Example configuration for built-in tools in prompt options", - options: example_options, - tools_configured: tools_config - }) - end - # endregion tool_configuration_example -end diff --git a/test/agents/callback_agent_test.rb b/test/agents/callback_agent_test.rb deleted file mode 100644 index 2ba28a42..00000000 --- a/test/agents/callback_agent_test.rb +++ /dev/null @@ -1,54 +0,0 @@ -require "test_helper" - -class CallbackAgentTest < ActiveSupport::TestCase - # Create a test agent with callbacks for documentation - class TestCallbackAgent < ApplicationAgent - attr_accessor :context_set, :response_processed - - # region callback_agent_before_action - before_action :set_context - - private - def set_context - # Logic to set the context for the action - @context_set = true - prompt_context.instructions = "Context has been set" - end - # endregion callback_agent_before_action - end - - class TestGenerationCallbackAgent < ApplicationAgent - attr_accessor :response_data - - # region callback_agent_after_generation - after_generation :process_response - - private - def process_response - # Access the generation provider response - @response_data = generation_provider.response - end - # endregion callback_agent_after_generation - end - - test "before_action callback is executed before prompt generation" do - agent = TestCallbackAgent.new - agent.params = { message: "Test" } - - # Process the agent to trigger callbacks - agent.process(:prompt_context) - - assert agent.context_set, "before_action callback should set context" - end - - test "after_generation callback is executed after response generation" do - VCR.use_cassette("callback_agent_after_generation") do - response = TestGenerationCallbackAgent.with(message: "Test callback").prompt_context.generate_now - - # The after_generation callback should have access to the response - # This demonstrates the callback pattern even though we can't directly test it - assert_not_nil response - assert_not_nil response.message.content - end - end -end diff --git a/test/agents/concern_tools_test.rb b/test/agents/concern_tools_test.rb deleted file mode 100644 index a0a5ec7e..00000000 --- a/test/agents/concern_tools_test.rb +++ /dev/null @@ -1,164 +0,0 @@ -require "test_helper" -require_relative "../dummy/app/agents/research_agent" -require_relative "../dummy/app/agents/concerns/research_tools" - -class ConcernToolsTest < ActiveSupport::TestCase - setup do - 
@agent = ResearchAgent.new - end - - test "research agent includes concern actions as available tools" do - # The concern adds these actions which should be available as tools - expected_actions = [ - "search_academic_papers", - "analyze_research_data", - "generate_research_visualization", - "search_with_mcp_sources" - ] - - agent_actions = @agent.action_methods - expected_actions.each do |action| - assert_includes agent_actions, action, "Expected #{action} to be available from concern" - end - end - - test "concern can add built-in tools for responses API" do - skip "Requires API credentials" unless has_openai_credentials? - - VCR.use_cassette("concern_web_search_responses_api") do - # When using responses API with multimodal content - # Use the search_academic_papers action from the concern - generation = ResearchAgent.with( - query: "latest research on large language models", - year_from: 2024, - year_to: 2025, - field: "AI" - ).search_academic_papers - - response = generation.generate_now - - assert response.message.content.present? - end - end - - test "concern can configure web search for chat completions API" do - skip "Requires API credentials" unless has_openai_credentials? - - VCR.use_cassette("concern_web_search_chat_api") do - # When using chat API with web search model - # Use the comprehensive_research action which builds tools dynamically - generation = ResearchAgent.with( - topic: "latest research on large language models", - depth: "detailed" - ).comprehensive_research - - response = generation.generate_now - - assert response.message.content.present? - end - end - - test "concern supports MCP tools only in responses API" do - skip "Requires API credentials" unless has_openai_credentials? - - VCR.use_cassette("concern_mcp_tools") do - # MCP is only supported in Responses API - # Use the search_with_mcp_sources action from the concern - generation = ResearchAgent.with( - query: "Ruby on Rails best practices", - sources: [ "github" ] - ).search_with_mcp_sources - - response = generation.generate_now - - assert response.message.content.present? - end - end - - test "concern actions work with both chat and responses API" do - # Test that the same action can work with different APIs - - # Test with Chat Completions API (function calling) - chat_prompt = ActiveAgent::ActionPrompt::Prompt.new - chat_prompt.options = { model: "gpt-4o" } - chat_prompt.actions = @agent.action_schemas # Function schemas - - assert chat_prompt.actions.any? { |a| a["function"]["name"] == "search_academic_papers" } - - # Test with Responses API (can use built-in tools) - responses_prompt = ActiveAgent::ActionPrompt::Prompt.new - responses_prompt.options = { - model: "gpt-5", - use_responses_api: true, - tools: [ - { type: "web_search_preview" }, - { type: "image_generation" } - ] - } - - # Should have both function tools and built-in tools - assert responses_prompt.options[:tools].any? { |t| t[:type] == "web_search_preview" } - assert responses_prompt.options[:tools].any? { |t| t[:type] == "image_generation" } - end - - test "concern can dynamically configure tools based on context" do - # The concern can decide which tools to include based on parameters - - # The ResearchAgent.comprehensive_research method should configure tools dynamically - # based on the depth parameter. We verify this by checking that the agent - # has the method and that it accepts the expected parameters. 
- - agent = ResearchAgent.new - assert agent.respond_to?(:comprehensive_research) - - # Verify the agent has access to the tool configuration methods - assert ResearchAgent.research_tools_config[:enable_web_search] - assert ResearchAgent.research_tools_config[:mcp_servers].present? - end - - test "concern configuration is inherited at class level" do - # ResearchAgent configured with specific settings - assert ResearchAgent.research_tools_config[:enable_web_search] - assert_equal [ "arxiv", "github" ], ResearchAgent.research_tools_config[:mcp_servers] - assert_equal "high", ResearchAgent.research_tools_config[:default_search_context] - end - - test "multiple concerns can add different tool types" do - # Create an agent with multiple concerns - class MultiToolAgent < ApplicationAgent - include ResearchTools - # Could include other tool concerns like ImageTools, DataTools, etc. - - generate_with :openai, model: "gpt-4o" - end - - agent = MultiToolAgent.new - - # Should have all actions from all concerns - assert agent.respond_to?(:search_academic_papers) - assert agent.respond_to?(:analyze_research_data) - assert agent.respond_to?(:generate_research_visualization) - end - - test "concern tools respect API limitations" do - # Test that we don't try to use unsupported features - - # MCP should not be available in Chat API - chat_prompt = ActiveAgent::ActionPrompt::Prompt.new - chat_prompt.options = { - model: "gpt-4o", # Regular chat model - tools: [ - { type: "mcp", server_url: "https://example.com" } # This won't work - ] - } - - # Provider should filter out MCP for chat API - provider = ActiveAgent::GenerationProvider::OpenAIProvider.new({ "model" => "gpt-4o" }) - provider.instance_variable_set(:@prompt, chat_prompt) - - # When using chat API, MCP tools should not be included - # This test verifies that the configuration is set up correctly - assert chat_prompt.options[:model] == "gpt-4o" - assert chat_prompt.options[:tools].any? 
{ |t| t[:type] == "mcp" } - end -end diff --git a/test/agents/configuration_precedence_test.rb b/test/agents/configuration_precedence_test.rb deleted file mode 100644 index 485022c6..00000000 --- a/test/agents/configuration_precedence_test.rb +++ /dev/null @@ -1,266 +0,0 @@ -require "test_helper" -require "active_agent/generation_provider/open_router_provider" - -class ConfigurationPrecedenceTest < ActiveSupport::TestCase - # region test_configuration_precedence - test "validates configuration precedence: runtime > agent > config" do - # Step 1: Set up config-level options (lowest priority) - # This would normally be in config/active_agent.yml - config_options = { - "service" => "OpenRouter", - "model" => "config-model", - "temperature" => 0.1, - "max_tokens" => 100, - "data_collection" => "allow" - } - - # Create a mock provider that exposes its config for testing - mock_provider = ActiveAgent::GenerationProvider::OpenRouterProvider.new(config_options) - - # Step 2: Create agent with generate_with options (medium priority) - agent_class = Class.new(ApplicationAgent) do - generate_with :open_router, - model: "agent-model", - temperature: 0.5, - data_collection: "deny" - # Note: max_tokens not specified here, should fall back to config - end - - agent = agent_class.new - - # Step 3: Call prompt with runtime options (highest priority) - prompt_context = agent.prompt( - message: "test", - options: { - temperature: 0.9, # Override both agent and config - max_tokens: 500 # Override config (agent didn't specify) - # Note: model not specified, should use agent-model - # Note: data_collection not specified, should use deny from agent - } - ) - - # Verify the merged options follow correct precedence - merged_options = prompt_context.options - - # Runtime options win when specified - assert_equal 0.9, merged_options[:temperature], "Runtime temperature should override agent and config" - assert_equal 500, merged_options[:max_tokens], "Runtime max_tokens should override config" - - # Agent options win over config when runtime not specified - assert_equal "agent-model", merged_options[:model], "Agent model should override config when runtime not specified" - assert_equal "deny", merged_options[:data_collection], "Agent data_collection should override config when runtime not specified" - end - # endregion test_configuration_precedence - - # region runtime_options_override - test "runtime options override everything" do - # Create agent with all levels configured - agent_class = Class.new(ApplicationAgent) do - generate_with :open_router, - model: "gpt-4", - temperature: 0.5, - max_tokens: 1000, - data_collection: "deny" - end - - agent = agent_class.new - - # Runtime options should override everything - prompt_context = agent.prompt( - message: "test", - options: { - model: "runtime-model", - temperature: 0.99, - max_tokens: 2000, - data_collection: [ "OpenAI", "Google" ] - } - ) - - options = prompt_context.options - assert_equal "runtime-model", options[:model] - assert_equal 0.99, options[:temperature] - assert_equal 2000, options[:max_tokens] - assert_equal [ "OpenAI", "Google" ], options[:data_collection] - end - # endregion runtime_options_override - - # region agent_overrides_config - test "agent options override config options" do - # Create agent with generate_with options - agent_class = Class.new(ApplicationAgent) do - generate_with :open_router, - model: "agent-override-model", - temperature: 0.7 - end - - agent = agent_class.new - - # Call prompt without runtime options - prompt_context = 
agent.prompt(message: "test") - - options = prompt_context.options - assert_equal "agent-override-model", options[:model] - assert_equal 0.7, options[:temperature] - end - # endregion agent_overrides_config - - test "config options are used as fallback" do - # Create a basic agent that inherits from ActiveAgent::Base instead of ApplicationAgent - # to avoid getting ApplicationAgent's default model - agent_class = Class.new(ActiveAgent::Base) do - generate_with :open_router - end - - agent = agent_class.new - provider = agent.send(:generation_provider) - - # Get the config values - config = provider.instance_variable_get(:@config) - - # The test config should have model = "qwen/qwen3-30b-a3b:free" - assert_equal "qwen/qwen3-30b-a3b:free", config["model"], "Config should have the test model" - - # Call prompt without any overrides - prompt_context = agent.prompt(message: "test") - - # Get config_options from the provider to verify they're loaded - config_options = provider.config - - # Should fall back to config values - but options might not directly reflect config - # because merge_options filters what gets included - options = prompt_context.options - - # Since no agent-level or runtime model is specified, we should see the config model - # However, the actual behavior may vary based on how options are merged - # Document the actual behavior - if options[:model] - assert_includes [ "qwen/qwen3-30b-a3b:free", nil ], options[:model] - end - end - - # region nil_values_dont_override - test "nil runtime values don't override" do - agent_class = Class.new(ApplicationAgent) do - generate_with :open_router, - model: "agent-model", - temperature: 0.5 - end - - agent = agent_class.new - - # Pass nil values in runtime options - prompt_context = agent.prompt( - message: "test", - options: { - model: nil, - temperature: nil, - max_tokens: 999 # Non-nil value should work - } - ) - - options = prompt_context.options - - # Nil values should not override - assert_equal "agent-model", options[:model] - assert_equal 0.5, options[:temperature] - - # Non-nil value should override - assert_equal 999, options[:max_tokens] - end - # endregion nil_values_dont_override - - test "explicit options parameter in prompt" do - agent_class = Class.new(ApplicationAgent) do - generate_with :open_router, - model: "agent-model", - temperature: 0.5 - end - - agent = agent_class.new - - # Test with explicit options parameter - prompt_context = agent.prompt( - message: "test", - options: { - options: { - custom_param: "custom_value" - }, - temperature: 0.8 # This is a runtime option - } - ) - - options = prompt_context.options - - # Runtime option should work - assert_equal 0.8, options[:temperature] - - # Custom param from explicit options should be included - assert_equal "custom_value", options[:custom_param] - end - - # region test_data_collection_precedence - test "data_collection follows precedence rules" do - # 1. Config level (lowest priority) - config_with_allow = { - "service" => "OpenRouter", - "model" => "openai/gpt-4o", - "data_collection" => "allow" - } - - # 2. 
Agent level with generate_with (medium priority) - agent_class = Class.new(ApplicationAgent) do - generate_with :open_router, - model: "openai/gpt-4o", - data_collection: "deny" # Override config - end - - agent = agent_class.new - provider = agent.send(:generation_provider) - - # Test without runtime override - should use agent level "deny" - prompt_without_runtime = agent.prompt(message: "test") - provider.instance_variable_set(:@prompt, prompt_without_runtime) - prefs = provider.send(:build_provider_preferences) - assert_equal "deny", prefs[:data_collection], "Agent-level data_collection should override config" - - # 3. Runtime level (highest priority) - prompt_with_runtime = agent.prompt( - message: "test", - options: { - data_collection: [ "OpenAI" ] # Override both agent and config - } - ) - provider.instance_variable_set(:@prompt, prompt_with_runtime) - prefs = provider.send(:build_provider_preferences) - assert_equal [ "OpenAI" ], prefs[:data_collection], "Runtime data_collection should override everything" - end - # endregion test_data_collection_precedence - - test "parent class options are inherited" do - # Create a parent agent with some options - parent_class = Class.new(ActiveAgent::Base) do - generate_with :open_router, - model: "parent-model", - temperature: 0.3 - end - - # Create child agent that overrides some options - child_class = Class.new(parent_class) do - generate_with :open_router, - temperature: 0.6 # Override parent - # model not specified, should inherit from parent - end - - agent = child_class.new - - # The child's options should include parent options - prompt_context = agent.prompt(message: "test") - options = prompt_context.options - - # Child override should win - assert_equal 0.6, options[:temperature] - - # Parent model might be inherited depending on implementation - # This test documents the actual behavior - end -end diff --git a/test/agents/data_collection_override_test.rb b/test/agents/data_collection_override_test.rb deleted file mode 100644 index 60ec9892..00000000 --- a/test/agents/data_collection_override_test.rb +++ /dev/null @@ -1,77 +0,0 @@ -require "test_helper" - -class DataCollectionOverrideTest < ActiveSupport::TestCase - test "runtime data_collection overrides configuration" do - # Create an agent with default "allow" configuration - agent_class = Class.new(ApplicationAgent) do - generate_with :open_router, - model: "openai/gpt-4o-mini" - end - - agent = agent_class.new - provider = agent.send(:generation_provider) - - # Verify it's an OpenRouter provider - assert_kind_of ActiveAgent::GenerationProvider::OpenRouterProvider, provider - - # Create a prompt with runtime override to "deny" - prompt_context = agent.prompt( - message: "test message", - options: { data_collection: "deny" } - ) - - # Set the prompt on the provider - provider.instance_variable_set(:@prompt, prompt_context) - - # Verify runtime override takes precedence - prefs = provider.send(:build_provider_preferences) - assert_equal "deny", prefs[:data_collection] - end - - test "runtime data_collection with selective providers" do - # Create an agent with "deny" configuration - agent_class = Class.new(ApplicationAgent) do - generate_with :open_router, - model: "openai/gpt-4o-mini", - data_collection: "deny" - end - - agent = agent_class.new - provider = agent.send(:generation_provider) - - # Create a prompt with runtime override to selective providers - prompt_context = agent.prompt( - message: "test message", - options: { data_collection: [ "OpenAI", "Google" ] } - ) - - # 
Set the prompt on the provider - provider.instance_variable_set(:@prompt, prompt_context) - - # Verify runtime override with array of providers - prefs = provider.send(:build_provider_preferences) - assert_equal [ "OpenAI", "Google" ], prefs[:data_collection] - end - - test "no runtime override uses configured value" do - # Create an agent with "deny" configuration - agent_class = Class.new(ApplicationAgent) do - generate_with :open_router, - model: "openai/gpt-4o-mini", - data_collection: "deny" - end - - agent = agent_class.new - provider = agent.send(:generation_provider) - - # Create a prompt without data_collection override - prompt_context = agent.prompt(message: "test message") - - # Set the prompt on the provider - provider.instance_variable_set(:@prompt, prompt_context) - - # Verify configured value is used - prefs = provider.send(:build_provider_preferences) - assert_equal "deny", prefs[:data_collection] - end -end diff --git a/test/agents/data_extraction_agent_test.rb b/test/agents/data_extraction_agent_test.rb deleted file mode 100644 index 8bc1e8b0..00000000 --- a/test/agents/data_extraction_agent_test.rb +++ /dev/null @@ -1,163 +0,0 @@ -require "test_helper" - -class DataExtractionAgentTest < ActiveSupport::TestCase - test "describe_cat_image creates a multimodal prompt with image and text content" do - prompt = nil - VCR.use_cassette("data_extraction_agent_describe_cat_image") do - # region data_extraction_agent_describe_cat_image - prompt = DataExtractionAgent.describe_cat_image - # endregion data_extraction_agent_describe_cat_image - - assert_equal "multipart/mixed", prompt.content_type - assert prompt.multimodal? - assert prompt.message.content.is_a?(Array) - assert_equal 2, prompt.message.content.size - end - - VCR.use_cassette("data_extraction_agent_describe_cat_image_generation_response") do - # region data_extraction_agent_describe_cat_image_response - response = prompt.generate_now - # endregion data_extraction_agent_describe_cat_image_response - doc_example_output(response) - expected_response = "The cat in the image appears to have a primarily dark gray coat with a white patch on its chest. It has a curious expression and is positioned in a relaxed manner. The background suggests a cozy indoor environment, possibly with soft bedding and other household items visible." - assert_equal expected_response, response.message.content - end - end - - test "parse_resume creates a multimodal prompt with file data" do - prompt = nil - VCR.use_cassette("data_extraction_agent_parse_resume") do - sample_resume_path = Rails.root.join("..", "..", "test", "fixtures", "files", "sample_resume.pdf") - # region data_extraction_agent_parse_resume - prompt = DataExtractionAgent.with( - output_schema: :resume_schema, - file_path: sample_resume_path - ).parse_content - # endregion data_extraction_agent_parse_resume - - assert_equal "multipart/mixed", prompt.content_type - assert prompt.multimodal? - assert prompt.message.content.is_a?(Array) - assert_equal 2, prompt.message.content.size - end - - VCR.use_cassette("data_extraction_agent_parse_resume_generation_response") do - response = prompt.generate_now - doc_example_output(response) - - # When output_schema IS present (:resume_schema), content is auto-parsed - assert response.message.content.is_a?(Hash) - assert response.message.content["name"].include?("John Doe") - assert response.message.content["experience"].any? 
{ |exp| exp["job_title"].include?("Software Engineer") } - end - end - - test "parse_resume creates a multimodal prompt with file data with structured output schema" do - prompt = nil - VCR.use_cassette("data_extraction_agent_parse_resume_with_structured_output") do - # region data_extraction_agent_parse_resume_with_structured_output - prompt = DataExtractionAgent.with( - output_schema: :resume_schema, - file_path: Rails.root.join("..", "..", "test", "fixtures", "files", "sample_resume.pdf") - ).parse_content - # endregion data_extraction_agent_parse_resume_with_structured_output - - assert_equal "multipart/mixed", prompt.content_type - assert prompt.multimodal?, "Prompt should be multimodal with file data" - assert prompt.message.content.is_a?(Array), "Prompt message content should be an array for multimodal support" - assert_equal 2, prompt.message.content.size - end - - VCR.use_cassette("data_extraction_agent_parse_resume_generation_response_with_structured_output") do - # region data_extraction_agent_parse_resume_with_structured_output_response - response = prompt.generate_now - # endregion data_extraction_agent_parse_resume_with_structured_output_response - # region data_extraction_agent_parse_resume_with_structured_output_json - # When output_schema is present, content is already parsed - json_response = response.message.content - # endregion data_extraction_agent_parse_resume_with_structured_output_json - doc_example_output(response) - doc_example_output(json_response, "parse-resume-json-response") - - assert_equal "application/json", response.message.content_type - assert_equal "resume_schema", response.prompt.output_schema["format"]["name"] - assert_equal json_response["name"], "John Doe" - assert_equal json_response["email"], "john.doe@example.com" - # Verify raw_content contains the JSON string - assert_equal response.message.raw_content, "{\"name\":\"John Doe\",\"email\":\"john.doe@example.com\",\"phone\":\"(555) 123-4567\",\"education\":[{\"degree\":\"BS Computer Science\",\"institution\":\"Stanford University\",\"year\":2020}],\"experience\":[{\"job_title\":\"Senior Software Engineer\",\"company\":\"TechCorp\",\"duration\":\"2020-2024\"}]}" - # Verify parsed content - assert json_response["name"].include?("John Doe") - assert json_response["experience"].any? { |exp| exp["job_title"].include?("Software Engineer") } - end - end - - test "parse_chart content from image data" do - prompt = nil - VCR.use_cassette("data_extraction_agent_parse_chart") do - sales_chart_path = Rails.root.join("..", "..", "test", "fixtures", "images", "sales_chart.png") - # region data_extraction_agent_parse_chart - prompt = DataExtractionAgent.with( - image_path: sales_chart_path - ).parse_content - # endregion data_extraction_agent_parse_chart - - assert_equal "multipart/mixed", prompt.content_type - assert prompt.multimodal?, "Prompt should be multimodal with image data" - assert prompt.message.content.is_a?(Array) - assert_equal 2, prompt.message.content.size - end - - VCR.use_cassette("data_extraction_agent_parse_chart_generation_response") do - response = prompt.generate_now - doc_example_output(response) - expected_response = "The image is a bar chart titled \"Quarterly Sales Report\" that displays sales revenue for the year 2024 by quarter. 
\n\n- **Y-axis** represents sales revenue in thousands of dollars, ranging from $0 to $100,000.\n- **X-axis** lists the four quarters: Q1, Q2, Q3, and Q4.\n\nThe bars are colored as follows:\n- Q1: Blue\n- Q2: Green\n- Q3: Yellow\n- Q4: Red\n\nThe heights of the bars indicate the sales revenue for each quarter, with Q4 showing the highest revenue." - assert_equal expected_response, response.message.content - end - end - - test "parse_chart content from image data with structured output schema" do - prompt = nil - VCR.use_cassette("data_extraction_agent_parse_chart_with_structured_output") do - sales_chart_path = Rails.root.join("..", "..", "test", "fixtures", "images", "sales_chart.png") - # region data_extraction_agent_parse_chart_with_structured_output - prompt = DataExtractionAgent.with( - output_schema: :chart_schema, - image_path: sales_chart_path - ).parse_content - # endregion data_extraction_agent_parse_chart_with_structured_output - - assert_equal "multipart/mixed", prompt.content_type - assert prompt.multimodal?, "Prompt should be multimodal with image data" - assert prompt.message.content.is_a?(Array) - assert_equal 2, prompt.message.content.size - end - - VCR.use_cassette("data_extraction_agent_parse_chart_generation_response_with_structured_output") do - # region data_extraction_agent_parse_chart_with_structured_output_response - response = prompt.generate_now - # endregion data_extraction_agent_parse_chart_with_structured_output_response - - # region data_extraction_agent_parse_chart_with_structured_output_json - # When output_schema is present, content is already parsed - json_response = response.message.content - # endregion data_extraction_agent_parse_chart_with_structured_output_json - - doc_example_output(response) - doc_example_output(json_response, "parse-chart-json-response") - assert_equal "application/json", response.message.content_type - - assert_equal "chart_schema", response.prompt.output_schema["format"]["name"] - - assert_equal json_response["title"], "Quarterly Sales Report" - assert json_response["data_points"].is_a?(Array), "Data points should be an array" - assert_equal json_response["data_points"].first["label"], "Q1" - assert_equal json_response["data_points"].first["value"], 25000 - assert_equal json_response["data_points"][1]["label"], "Q2" - assert_equal json_response["data_points"][1]["value"], 50000 - assert_equal json_response["data_points"][2]["label"], "Q3" - assert_equal json_response["data_points"][2]["value"], 75000 - assert_equal json_response["data_points"].last["label"], "Q4" - assert_equal json_response["data_points"].last["value"], 100000 - end - end -end diff --git a/test/agents/embedding_agent_test.rb b/test/agents/embedding_agent_test.rb deleted file mode 100644 index 7ca578d8..00000000 --- a/test/agents/embedding_agent_test.rb +++ /dev/null @@ -1,285 +0,0 @@ -require "test_helper" - -class EmbeddingAgentTest < ActiveSupport::TestCase - # region embedding_sync_generation - test "generates embeddings synchronously with embed_now" do - VCR.use_cassette("embedding_agent_sync") do - # Create a generation for embedding - generation = ApplicationAgent.with( - message: "The quick brown fox jumps over the lazy dog" - ).prompt_context - - # Generate embedding synchronously - response = generation.embed_now - - # Extract embedding vector - embedding_vector = response.message.content - - assert_kind_of Array, embedding_vector - assert embedding_vector.all? 
{ |v| v.is_a?(Float) } - assert_includes [ 1536, 3072 ], embedding_vector.size # OpenAI dimensions vary by model - - # Document the example - doc_example_output(response) - - embedding_vector - end - end - # endregion embedding_sync_generation - - # region embedding_async_generation - test "generates embeddings asynchronously with embed_later" do - # Create a generation for async embedding - generation = ApplicationAgent.with( - message: "Artificial intelligence is transforming technology" - ).prompt_context - - # Mock the enqueue_generation private method - generation.instance_eval do - def enqueue_generation(method, options = {}) - @enqueue_called = true - @enqueue_method = method - @enqueue_options = options - true - end - - def enqueue_called? - @enqueue_called - end - - def enqueue_method - @enqueue_method - end - - def enqueue_options - @enqueue_options - end - end - - # Queue embedding for background processing - result = generation.embed_later( - priority: :low, - queue: :embeddings - ) - - assert result - assert generation.enqueue_called? - assert_equal :embed_now, generation.enqueue_method - assert_equal({ priority: :low, queue: :embeddings }, generation.enqueue_options) - end - # endregion embedding_async_generation - - # region embedding_with_callbacks - test "processes embeddings with callbacks" do - VCR.use_cassette("embedding_agent_callbacks") do - # Create a custom agent with embedding callbacks - custom_agent_class = Class.new(ApplicationAgent) do - attr_accessor :before_embedding_called, :after_embedding_called - - before_embedding :track_before - after_embedding :track_after - - def track_before - self.before_embedding_called = true - end - - def track_after - self.after_embedding_called = true - end - end - - # Generate embedding with callbacks - generation = custom_agent_class.with( - message: "Testing embedding callbacks" - ).prompt_context - - agent = generation.send(:processed_agent) - response = generation.embed_now - - assert agent.before_embedding_called - assert agent.after_embedding_called - assert_not_nil response.message.content - - doc_example_output(response) - end - end - # endregion embedding_with_callbacks - - # region embedding_similarity_search - test "performs similarity search with embeddings" do - VCR.use_cassette("embedding_similarity_search") do - documents = [ - "The cat sat on the mat", - "Dogs are loyal companions", - "Machine learning is a subset of AI", - "The feline rested on the rug" - ] - - # Generate embeddings for all documents - embeddings = documents.map do |doc| - generation = ApplicationAgent.with(message: doc).prompt_context - generation.embed_now.message.content - end - - # Query embedding - query = "cat on mat" - query_generation = ApplicationAgent.with(message: query).prompt_context - query_embedding = query_generation.embed_now.message.content - - # Calculate cosine similarities - similarities = embeddings.map.with_index do |embedding, index| - similarity = cosine_similarity(query_embedding, embedding) - { document: documents[index], similarity: similarity } - end - - # Sort by similarity - results = similarities.sort_by { |s| -s[:similarity] } - - # Most similar should be the cat/mat documents - assert_equal "The cat sat on the mat", results.first[:document] - assert results.first[:similarity] > 0.5, "Similarity should be > 0.5, got #{results.first[:similarity]}" - - # Document the results - doc_example_output(results.first(2)) - end - end - # endregion embedding_similarity_search - - # region embedding_dimension_test - test 
"verifies embedding dimensions for different models" do - VCR.use_cassette("embedding_dimensions") do - # Test with default model (usually text-embedding-3-small or ada-002) - generation = ApplicationAgent.with( - message: "Testing embedding dimensions" - ).prompt_context - - response = generation.embed_now - embedding = response.message.content - - # Most OpenAI models return 1536 dimensions by default - assert_includes [ 1536, 3072 ], embedding.size - - doc_example_output({ - model: "default", - dimensions: embedding.size, - sample: embedding[0..4] - }) - end - end - # endregion embedding_dimension_test - - # region embedding_openai_model_config - test "uses configured OpenAI embedding model" do - VCR.use_cassette("embedding_openai_model") do - # Create agent with specific OpenAI model configuration - custom_agent_class = Class.new(ApplicationAgent) do - generate_with :openai, - model: "gpt-4o", - embedding_model: "text-embedding-3-small" - end - - generation = custom_agent_class.with( - message: "Testing OpenAI embedding model configuration" - ).prompt_context - - response = generation.embed_now - embedding = response.message.content - - # text-embedding-3-small can have different dimensions depending on truncation - assert_includes [ 1536, 3072 ], embedding.size - assert embedding.all? { |v| v.is_a?(Float) } - - doc_example_output({ - model: "text-embedding-3-small", - dimensions: embedding.size, - sample: embedding[0..2] - }) - end - end - # endregion embedding_openai_model_config - - # region embedding_ollama_provider_test - test "generates embeddings with Ollama provider" do - VCR.use_cassette("embedding_ollama_provider") do - # Create agent configured for Ollama - ollama_agent_class = Class.new(ApplicationAgent) do - generate_with :ollama, - model: "llama3", - embedding_model: "nomic-embed-text", - host: "http://localhost:11434" - end - - generation = ollama_agent_class.with( - message: "Testing Ollama embedding generation" - ).prompt_context - - begin - response = generation.embed_now - embedding = response.message.content - - assert_kind_of Array, embedding - assert embedding.all? { |v| v.is_a?(Numeric) } - assert embedding.size > 0 - - doc_example_output({ - provider: "ollama", - model: "nomic-embed-text", - dimensions: embedding.size, - sample: embedding[0..2] - }) - rescue Errno::ECONNREFUSED, Net::OpenTimeout => e - # Document the expected error when Ollama is not running - doc_example_output({ - error: "Connection refused", - message: "Ollama is not running locally", - solution: "Start Ollama with: ollama serve" - }) - skip "Ollama is not running locally: #{e.message}" - end - end - end - # endregion embedding_ollama_provider_test - - # region embedding_batch_processing - test "processes multiple embeddings in batch" do - VCR.use_cassette("embedding_batch_processing") do - texts = [ - "First document for embedding", - "Second document with different content", - "Third document about technology" - ] - - embeddings = [] - texts.each do |text| - generation = ApplicationAgent.with(message: text).prompt_context - embedding = generation.embed_now.message.content - embeddings << { - text: text[0..20] + "...", - dimensions: embedding.size, - sample: embedding[0..2] - } - end - - assert_equal 3, embeddings.size - embeddings.each do |result| - assert result[:dimensions] > 0 - assert result[:sample].all? 
{ |v| v.is_a?(Float) } - end - - doc_example_output(embeddings) - end - end - # endregion embedding_batch_processing - - private - - def cosine_similarity(vec1, vec2) - dot_product = vec1.zip(vec2).map { |a, b| a * b }.sum - magnitude1 = Math.sqrt(vec1.map { |v| v**2 }.sum) - magnitude2 = Math.sqrt(vec2.map { |v| v**2 }.sum) - - return 0.0 if magnitude1 == 0 || magnitude2 == 0 - - dot_product / (magnitude1 * magnitude2) - end -end diff --git a/test/agents/messages_examples_test.rb b/test/agents/messages_examples_test.rb deleted file mode 100644 index d2eb0b5c..00000000 --- a/test/agents/messages_examples_test.rb +++ /dev/null @@ -1,100 +0,0 @@ -require "test_helper" -require "active_agent/action_prompt/message" -require "active_agent/action_prompt/action" - -class MessagesExamplesTest < ActiveSupport::TestCase - test "message structure and roles" do - # region messages_structure - # Create messages with different roles - system_message = ActiveAgent::ActionPrompt::Message.new( - role: :system, - content: "You are a helpful travel agent." - ) - - user_message = ActiveAgent::ActionPrompt::Message.new( - role: :user, - content: "I need to book a flight to Tokyo" - ) - - assistant_message = ActiveAgent::ActionPrompt::Message.new( - role: :assistant, - content: "I'll help you find flights to Tokyo. Let me search for available options." - ) - - # Messages have roles and content - assert_equal :system, system_message.role - assert_equal :user, user_message.role - assert_equal :assistant, assistant_message.role - # endregion messages_structure - end - - test "messages with requested actions" do - # region messages_with_actions - # Assistant messages can include requested actions - message = ActiveAgent::ActionPrompt::Message.new( - role: :assistant, - content: "I'll search for flights to Paris for you.", - requested_actions: [ - ActiveAgent::ActionPrompt::Action.new( - name: "search", - params: { destination: "Paris", departure_date: "2024-06-15" } - ) - ] - ) - - assert message.action_requested - assert_equal 1, message.requested_actions.size - assert_equal "search", message.requested_actions.first.name - # endregion messages_with_actions - end - - test "tool messages for action responses" do - # region tool_messages - # Tool messages contain results from executed actions - tool_message = ActiveAgent::ActionPrompt::Message.new( - role: :tool, - content: "Found 5 flights to London:\n- BA 247: $599\n- AA 106: $650\n- VS 003: $720", - action_name: "search", - action_id: "call_123abc" - ) - - assert_equal :tool, tool_message.role - assert_equal "search", tool_message.action_name - assert tool_message.content.include?("Found 5 flights") - # endregion tool_messages - end - - test "building message context for prompts" do - # region message_context - # Messages form the conversation context - messages = [ - ActiveAgent::ActionPrompt::Message.new( - role: :system, - content: "You are a travel booking assistant." - ), - ActiveAgent::ActionPrompt::Message.new( - role: :user, - content: "Book me a flight to Rome" - ), - ActiveAgent::ActionPrompt::Message.new( - role: :assistant, - content: "I'll help you book a flight to Rome. When would you like to travel?" 
-      ),
-      ActiveAgent::ActionPrompt::Message.new(
-        role: :user,
-        content: "Next Friday"
-      )
-    ]
-
-    # Pass messages as context to agents
-    agent = TravelAgent.with(
-      message: "Find flights for next Friday",
-      messages: messages
-    )
-
-    prompt = agent.prompt_context
-    # The prompt will have the existing messages plus any added by the agent
-    assert prompt.messages.size >= 5 # At least the messages we provided
-    # endregion message_context
-  end
-end
diff --git a/test/agents/multi_turn_tool_test.rb b/test/agents/multi_turn_tool_test.rb
deleted file mode 100644
index 1a05f1cf..00000000
--- a/test/agents/multi_turn_tool_test.rb
+++ /dev/null
@@ -1,74 +0,0 @@
-require "test_helper"
-
-class MultiTurnToolTest < ActiveSupport::TestCase
-  test "agent performs tool call and continues generation with result" do
-    VCR.use_cassette("multi_turn_tool_basic") do
-      # region multi_turn_basic
-      message = "Add 2 and 3"
-      prompt = CalculatorAgent.with(message: message).prompt_context
-      response = prompt.generate_now
-      # endregion multi_turn_basic
-
-      doc_example_output(response)
-
-      # Verify the conversation flow
-      assert response.prompt.messages.size >= 5
-
-      # Find messages by type
-      system_messages = response.prompt.messages.select { |m| m.role == :system }
-      user_messages = response.prompt.messages.select { |m| m.role == :user }
-      assistant_messages = response.prompt.messages.select { |m| m.role == :assistant }
-      tool_messages = response.prompt.messages.select { |m| m.role == :tool }
-
-      # Should have system messages
-      assert system_messages.any?, "Should have system messages"
-
-      # At least one system message should mention calculator if the agent has instructions
-      if system_messages.any? { |m| m.content.present? }
-        assert system_messages.any? { |m| m.content.include?("calculator") },
-          "System message should mention calculator"
-      end
-
-      # User message
-      assert_equal 1, user_messages.size
-      assert_equal "Add 2 and 3", user_messages.first.content
-
-      # Assistant makes tool call and provides final answer
-      assert_equal 2, assistant_messages.size
-      assert assistant_messages.first.action_requested
-      assert_equal "add", assistant_messages.first.requested_actions.first.name
-
-      # Tool response
-      assert_equal 1, tool_messages.size
-      assert_equal "5.0", tool_messages.first.content
-
-      # Assistant provides final answer
-      assert_includes assistant_messages.last.content, "5"
-    end
-  end
-
-  test "agent chains multiple tool calls for complex task" do
-    VCR.use_cassette("multi_turn_tool_chain") do
-      # region multi_turn_chain
-      message = "Calculate the area of a 5x10 rectangle, then multiply by 2"
-      prompt = CalculatorAgent.with(message: message).prompt_context
-      response = prompt.generate_now
-      # endregion multi_turn_chain
-
-      doc_example_output(response)
-
-      # Should have at least 2 tool calls
-      tool_messages = response.prompt.messages.select { |m| m.role == :tool }
-      assert tool_messages.size >= 2
-
-      # First tool call calculates area (50)
-      assert_equal "50.0", tool_messages[0].content
-
-      # Second tool call multiplies by 2 (100)
-      assert_equal "100.0", tool_messages[1].content
-
-      # Final message should mention the result
-      assert_includes response.message.content, "100"
-    end
-  end
-end
diff --git a/test/agents/ollama_agent_test.rb b/test/agents/ollama_agent_test.rb
deleted file mode 100644
index a5e55e5f..00000000
--- a/test/agents/ollama_agent_test.rb
+++ /dev/null
@@ -1,29 +0,0 @@
-require "test_helper"
-
-class OllamaAgentTest < ActiveSupport::TestCase
-  test "it renders a prompt_context and generates a response" do
-    VCR.use_cassette("ollama_prompt_context_response") do
-      message = "Show me a cat"
-      prompt = OllamaAgent.with(message: message).prompt_context
-      response = prompt.generate_now
-
-      assert_equal message, OllamaAgent.with(message: message).prompt_context.message.content
-      assert_equal 3, response.prompt.messages.size
-      assert_equal :system, response.prompt.messages[0].role
-      assert_equal :user, response.prompt.messages[1].role
-      assert_equal message, response.prompt.messages[1].content
-      assert_equal :assistant, response.prompt.messages[2].role
-    end
-  end
-
-  test "it uses the correct model" do
-    prompt = OllamaAgent.with(message: "Test").prompt_context
-    assert_equal "gemma3:latest", prompt.options[:model]
-  end
-
-  test "it sets the correct system instructions" do
-    prompt = OllamaAgent.with(message: "Test").prompt_context
-    system_message = prompt.messages.find { |m| m.role == :system }
-    assert_equal "You're a basic Ollama agent.", system_message.content
-  end
-end
diff --git a/test/agents/open_ai_agent_test.rb b/test/agents/open_ai_agent_test.rb
deleted file mode 100644
index 9593799d..00000000
--- a/test/agents/open_ai_agent_test.rb
+++ /dev/null
@@ -1,41 +0,0 @@
-require "test_helper"
-
-class OpenAIAgentTest < ActiveAgentTestCase
-  test "it renders a prompt_context generates a response" do
-    VCR.use_cassette("openai_prompt_context_response") do
-      message = "Show me a cat"
-      prompt = OpenAIAgent.with(message: message).prompt_context
-      response = prompt.generate_now
-      assert_equal message, OpenAIAgent.with(message: message).prompt_context.message.content
-      assert_equal 3, response.prompt.messages.size
-      assert_equal :system, response.prompt.messages[0].role
-      assert_equal :user, response.prompt.messages[1].role
-      assert_equal :assistant, response.prompt.messages[2].role
-    end
-  end
-end
-
-class OpenAIClientTest < ActiveAgentTestCase
-  def setup
-    super
-    # Configure OpenAI before tests
-    OpenAI.configure do |config|
-      config.access_token = "test-api-key"
-      config.log_errors = Rails.env.development?
- config.request_timeout = 600 - end - end - - test "loads configuration from environment" do - # Use empty config to test environment-based configuration - with_active_agent_config({}) do - class OpenAIClientAgent < ApplicationAgent - layout "agent" - generate_with :openai - end - - client = OpenAI::Client.new - assert_equal OpenAIClientAgent.generation_provider.access_token, client.access_token - end - end -end diff --git a/test/agents/open_router_agent_test.rb b/test/agents/open_router_agent_test.rb deleted file mode 100644 index 951a0537..00000000 --- a/test/agents/open_router_agent_test.rb +++ /dev/null @@ -1,73 +0,0 @@ -require "test_helper" - -class OpenRouterAgentTest < ActiveSupport::TestCase - test "it renders a prompt_context and generates a response" do - VCR.use_cassette("open_router_prompt_context_response") do - message = "Show me a cat" - prompt = OpenRouterAgent.with(message: message).prompt_context - response = prompt.generate_now - - assert_equal message, OpenRouterAgent.with(message: message).prompt_context.message.content - assert_equal 3, response.prompt.messages.size - assert_equal :system, response.prompt.messages[0].role - assert_equal :user, response.prompt.messages[1].role - assert_equal message, response.prompt.messages[1].content - assert_equal :assistant, response.prompt.messages[2].role - end - end - - test "it uses the correct model" do - prompt = OpenRouterAgent.with(message: "Test").prompt_context - assert_equal "qwen/qwen3-30b-a3b:free", prompt.options[:model] - end - - test "it sets the correct system instructions" do - prompt = OpenRouterAgent.with(message: "Test").prompt_context - system_message = prompt.messages.find { |m| m.role == :system } - assert_equal "You're a basic Open Router agent.", system_message.content - end - - test "it can use fallback models when configured" do - # Create a custom agent with fallback models - agent_class = Class.new(ApplicationAgent) do - generate_with :open_router, - model: "openai/gpt-4o", - fallback_models: [ "anthropic/claude-3-opus", "google/gemini-pro" ], - enable_fallbacks: true - end - - # Just verify the prompt can be created with these options - prompt = agent_class.with(message: "test").prompt_context - assert_not_nil prompt - end - - test "it can configure provider preferences" do - # Create a custom agent with provider preferences - agent_class = Class.new(ApplicationAgent) do - generate_with :open_router, - model: "openai/gpt-4o", - provider: { - "order" => [ "OpenAI", "Anthropic" ], - "require_parameters" => true, - "data_collection" => "deny" - } - end - - # Just verify the prompt can be created with these options - prompt = agent_class.with(message: "test").prompt_context - assert_not_nil prompt - end - - test "it can enable transforms" do - # Create a custom agent with transforms - agent_class = Class.new(ApplicationAgent) do - generate_with :open_router, - model: "anthropic/claude-3-opus", - transforms: [ "middle-out" ] - end - - # Just verify the prompt can be created with these options - prompt = agent_class.with(message: "test").prompt_context - assert_not_nil prompt - end -end diff --git a/test/agents/open_router_integration_test.rb b/test/agents/open_router_integration_test.rb deleted file mode 100644 index 119afec6..00000000 --- a/test/agents/open_router_integration_test.rb +++ /dev/null @@ -1,525 +0,0 @@ -require "test_helper" -require "base64" -require "active_agent/action_prompt/message" - -class OpenRouterIntegrationTest < ActiveSupport::TestCase - setup do - @agent = 
OpenRouterIntegrationAgent.new - end - - test "analyzes image with structured output schema" do - skip "Requires actual OpenRouter API key and credits" unless has_openrouter_credentials? - - VCR.use_cassette("openrouter_image_analysis_structured") do - # Use the sales chart image URL for structured analysis - image_url = "https://raw.githubusercontent.com/activeagents/activeagent/refs/heads/main/test/fixtures/images/sales_chart.png" - - prompt = OpenRouterIntegrationAgent.with(image_url: image_url).analyze_image - response = prompt.generate_now - - assert_not_nil response - assert_not_nil response.message - - # When output_schema is present, content is already parsed - result = response.message.content - - # Verify the structure matches our schema - assert result.key?("description") - assert result.key?("objects") - assert result.key?("scene_type") - assert result.key?("primary_colors") - assert result["objects"].is_a?(Array) - assert [ "indoor", "outdoor", "abstract", "document", "photo", "illustration" ].include?(result["scene_type"]) - end - end - - test "analyzes remote image URL without structured output" do - skip "Requires actual OpenRouter API key and credits" unless has_openrouter_credentials? - - VCR.use_cassette("openrouter_remote_image_basic") do - # Use a landscape image URL for basic analysis - image_url = "https://picsum.photos/400/300" - - # For now, just use analyze_image without the structured output schema - # We'll get a natural language description instead of JSON - prompt = OpenRouterIntegrationAgent.with(image_url: image_url).analyze_image - response = prompt.generate_now - - assert_not_nil response - assert_not_nil response.message - assert response.message.content.is_a?(String) - assert response.message.content.length > 10 - # Since analyze_image uses structured output, we'll get JSON - # Just verify we got a response - # In the future, we could add a simple_analyze action without schema - - # Generate documentation example - doc_example_output(response) - end - end - - test "extracts receipt data with structured output from local file" do - skip "Requires actual OpenRouter API key and credits" unless has_openrouter_credentials? - - VCR.use_cassette("openrouter_receipt_extraction_local") do - # Use the test receipt image - file exists, no conditional needed - receipt_path = Rails.root.join("..", "..", "test", "fixtures", "images", "test_receipt.png") - - prompt = OpenRouterIntegrationAgent.with(image_path: receipt_path).extract_receipt_data - response = prompt.generate_now - - assert_not_nil response - assert_not_nil response.message - - # When output_schema is present, content is already parsed - result = response.message.content - - assert_equal result["merchant"]["name"], "Corner Mart" - assert_equal result["total"]["amount"], 14.83 - assert_equal result["items"].size, 4 - result["items"].each do |item| - assert item.key?("name") - assert item.key?("quantity") - assert item.key?("price") - end - assert_equal result["items"][0], { "name"=>"Milk", "quantity"=>1, "price"=>3.49 } - assert_equal result["items"][1], { "name"=>"Bread", "quantity"=>1, "price"=>2.29 } - assert_equal result["items"][2], { "name"=>"Apples", "quantity"=>1, "price"=>5.1 } - assert_equal result["items"][3], { "name"=>"Eggs", "quantity"=>1, "price"=>2.99 } - # Generate documentation example - doc_example_output(response) - end - end - - test "handles base64 encoded images with sales chart" do - skip "Requires actual OpenRouter API key and credits" unless has_openrouter_credentials? 
- - VCR.use_cassette("openrouter_base64_sales_chart") do - # Use the sales chart image - chart_path = Rails.root.join("..", "..", "test", "fixtures", "images", "sales_chart.png") - - prompt = OpenRouterIntegrationAgent.with(image_path: chart_path).analyze_image - response = prompt.generate_now - - assert_not_nil response - assert_not_nil response.message - assert_includes response.message.content, "(Q1, Q2, Q3, Q4), with varying heights indicating different sales amounts" - - # Generate documentation example - doc_example_output(response) - end - end - - test "processes PDF document from local file" do - skip "Requires actual OpenRouter API key and credits" unless has_openrouter_credentials? - - VCR.use_cassette("openrouter_pdf_local") do - # Use the sample resume PDF - pdf_path = Rails.root.join("..", "..", "test", "fixtures", "files", "sample_resume.pdf") - - # Read and encode the PDF as base64 - OpenRouter accepts PDFs as image_url with data URL - pdf_data = Base64.strict_encode64(File.read(pdf_path)) - - prompt = OpenRouterIntegrationAgent.with( - pdf_data: pdf_data, - prompt_text: "Extract information from this document and return as JSON", - output_schema: :resume_schema - ).analyze_pdf - response = prompt.generate_now - - assert_not_nil response - assert_not_nil response.message - assert response.message.content.present? - - # When output_schema is present, content is already parsed - result = response.message.content - - assert_equal result["name"], "John Doe" - assert_equal result["email"], "john.doe@example.com" - assert_equal result["phone"], "(555) 123-4567" - assert_equal result["education"].first, { "degree"=>"BS Computer Science", "institution"=>"Stanford University", "year"=>2020 } - assert_equal result["experience"].first, { "job_title"=>"Senior Software Engineer", "company"=>"TechCorp", "duration"=>"2020-2024" } - - # Generate documentation example - doc_example_output(response) - end - end - # endregion pdf_processing_local - - test "processes PDF from remote URL of resume no plugins" do - skip "Requires actual OpenRouter API key and credits" unless has_openrouter_credentials? - - VCR.use_cassette("openrouter_pdf_remote_no_plugin") do - pdf_url = "https://docs.activeagents.ai/sample_resume.pdf" - - prompt = OpenRouterIntegrationAgent.with( - pdf_url: pdf_url, - prompt_text: "Analyze the PDF", - output_schema: :resume_schema, - skip_plugin: true - ).analyze_pdf - - # Remote URLs are not supported without a PDF engine plugin - # OpenAI: Inputs by file URL are not supported for chat completions. Use the ResponsesAPI for this option. - # https://platform.openai.com/docs/guides/pdf-files#file-urls - # Accept either the OpenAI error directly or our wrapped error - # Suppress ruby-openai gem's error output to STDERR - error = assert_raises(ActiveAgent::GenerationProvider::Base::GenerationProviderError, OpenAI::Error) do - prompt.generate_now - end - - # Check the error message regardless of which error type was raised - error_message = error.message - assert_match(/Missing required parameter.*file_id/, error_message) - assert_match(/Provider returned error|invalid_request_error/, error_message) - end - end - - # region pdf_native_support - test "processes PDF with native model support" do - skip "Requires actual OpenRouter API key and credits" unless has_openrouter_credentials? 
- - VCR.use_cassette("openrouter_pdf_native") do - # Test with a model that might have native PDF support - # Using the native engine (charged as input tokens) - pdf_path = Rails.root.join("..", "..", "test", "fixtures", "files", "sample_resume.pdf") - pdf_data = Base64.strict_encode64(File.read(pdf_path)) - - prompt = OpenRouterIntegrationAgent.with( - pdf_data: pdf_data, - prompt_text: "Analyze this PDF document", - pdf_engine: "native" # Use native engine (charged as input tokens) - ).analyze_pdf - - # First verify the prompt has the plugins in options - assert prompt.options[:plugins].present?, "Plugins should be present in prompt options" - assert prompt.options[:fallback_models].present?, "Fallback models should be present in prompt options" - assert_equal "file-parser", prompt.options[:plugins][0][:id] - assert_equal "native", prompt.options[:plugins][0][:pdf][:engine] - - response = prompt.generate_now - - assert_not_nil response - assert_not_nil response.message - assert response.message.content.present? - assert_includes response.message.content, "John Doe" - - # Generate documentation example - doc_example_output(response) - end - end - # endregion pdf_native_support - - test "processes PDF without any plugin for models with built-in support" do - skip "Requires actual OpenRouter API key and credits" unless has_openrouter_credentials? - - VCR.use_cassette("openrouter_pdf_no_plugin") do - # Test without any plugin - for models that have built-in PDF support - pdf_url = "https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf" - - prompt = OpenRouterIntegrationAgent.with( - pdf_url: pdf_url, - prompt_text: "Analyze this PDF document", - skip_plugin: true # Don't use any plugin - ).analyze_pdf - - # Verify no plugins are included when skip_plugin is true - assert_empty prompt.options[:plugins], "Should not have plugins when skip_plugin is true" - - response = prompt.generate_now - raw_response = response.raw_response - assert_equal "Google", raw_response["provider"] - assert_not_nil response - assert_not_nil response.message - assert response.message.content.present? - # Generate documentation example - doc_example_output(response) - end - end - - test "processes scanned PDF with OCR engine" do - skip "Requires actual OpenRouter API key and credits" unless has_openrouter_credentials? 
- - VCR.use_cassette("openrouter_pdf_ocr") do - # Test with the mistral-ocr engine for scanned documents - # Using a simple PDF that should be processable - pdf_url = "https://docs.activeagents.ai/sample_resume.pdf" - - prompt = OpenRouterIntegrationAgent.with( - pdf_url: pdf_url, - prompt_text: "Extract text from this PDF.", - output_schema: :resume_schema, - pdf_engine: "mistral-ocr" # OCR engine for text extraction - ).analyze_pdf - - # Verify OCR engine is specified - assert prompt.options[:plugins].present?, "Should have plugins for OCR" - assert_equal "mistral-ocr", prompt.options[:plugins][0][:pdf][:engine] - - response = prompt.generate_now - - # MUST return valid JSON - no fallback allowed - raw_response = response.raw_response - # When output_schema is present, content is already parsed - result = response.message.content - - assert_equal result["name"], "John Doe" - assert_equal result["email"], "john.doe@example.com" - assert_equal result["phone"], "(555) 123-4567" - assert_equal result["education"], [ { "degree"=>"BS Computer Science", "institution"=>"Stanford University", "year"=>2020 } ] - assert_equal result["experience"], [ { "job_title"=>"Senior Software Engineer", "company"=>"TechCorp", "duration"=>"2020-2024" } ] - - # Generate documentation example - doc_example_output(response) - end - end - - test "uses fallback models when primary fails" do - skip "Requires actual OpenRouter API key and credits" unless has_openrouter_credentials? - - VCR.use_cassette("openrouter_fallback_models") do - prompt = OpenRouterIntegrationAgent.test_fallback - response = prompt.generate_now - - assert_not_nil response - assert_not_nil response.message - - # Check metadata for fallback usage - if response.respond_to?(:metadata) && response.metadata - # Should use one of the fallback models, not the primary - possible_models = [ "openai/gpt-3.5-turbo-0301", "openai/gpt-3.5-turbo", "openai/gpt-4o-mini" ] - assert possible_models.include?(response.metadata[:model_used]) - assert response.metadata[:provider].present? - end - - # The response should still work (2+2=4) - assert response.message.content.include?("4") - - # Generate documentation example - doc_example_output(response) - end - end - - test "applies transforms for long content" do - skip "Requires actual OpenRouter API key and credits" unless has_openrouter_credentials? - - VCR.use_cassette("openrouter_transforms") do - # Generate a very long text - long_text = "Lorem ipsum dolor sit amet. " * 1000 - - prompt = OpenRouterIntegrationAgent.with(text: long_text).process_long_text - response = prompt.generate_now - - assert_not_nil response - assert_not_nil response.message - assert response.message.content.present? - - # The summary should be much shorter than the original - assert response.message.content.length < long_text.length / 10 - - # Generate documentation example - doc_example_output(response) - end - end - - test "tracks usage and costs" do - skip "Requires actual OpenRouter API key and credits" unless has_openrouter_credentials? 
- - VCR.use_cassette("openrouter_cost_tracking") do - prompt = OpenRouterIntegrationAgent.with(message: "Hello").prompt_context - response = prompt.generate_now - - assert_not_nil response - - # Check for usage information - if response.respond_to?(:usage) && response.usage - assert response.usage["prompt_tokens"].is_a?(Integer) - assert response.usage["completion_tokens"].is_a?(Integer) - assert response.usage["total_tokens"].is_a?(Integer) - end - - # Check for metadata with model information from OpenRouter - if response.respond_to?(:metadata) && response.metadata - assert response.metadata[:model_used].present? - assert response.metadata[:provider].present? - # Verify we're using the expected model (gpt-4o-mini) - assert_equal "openai/gpt-4o-mini", response.metadata[:model_used] - end - - # Generate documentation example - doc_example_output(response) - end - end - - test "includes OpenRouter headers in requests" do - provider = ActiveAgent::GenerationProvider::OpenRouterProvider.new( - "model" => "openai/gpt-4o", - "app_name" => "TestApp", - "site_url" => "https://test.example.com" - ) - - # Get the headers that would be sent - headers = provider.send(:openrouter_headers) - - assert_equal "https://test.example.com", headers["HTTP-Referer"] - assert_equal "TestApp", headers["X-Title"] - end - - test "builds provider preferences correctly" do - provider = ActiveAgent::GenerationProvider::OpenRouterProvider.new( - "model" => "openai/gpt-4o", - "enable_fallbacks" => true, - "provider" => { - "order" => [ "OpenAI", "Anthropic" ], - "require_parameters" => true, - "data_collection" => "deny" - } - ) - - prefs = provider.send(:build_provider_preferences) - - assert_equal [ "OpenAI", "Anthropic" ], prefs[:order] - assert_equal true, prefs[:require_parameters] - assert_equal true, prefs[:allow_fallbacks] - assert_equal "deny", prefs[:data_collection] - end - - test "configures data collection policies" do - # Test deny all data collection - provider_deny = ActiveAgent::GenerationProvider::OpenRouterProvider.new( - "model" => "openai/gpt-4o", - "data_collection" => "deny" - ) - prefs_deny = provider_deny.send(:build_provider_preferences) - assert_equal "deny", prefs_deny[:data_collection] - - # Test allow all data collection (default) - provider_allow = ActiveAgent::GenerationProvider::OpenRouterProvider.new( - "model" => "openai/gpt-4o" - ) - prefs_allow = provider_allow.send(:build_provider_preferences) - assert_equal "allow", prefs_allow[:data_collection] - - # Test selective provider data collection - provider_selective = ActiveAgent::GenerationProvider::OpenRouterProvider.new( - "model" => "openai/gpt-4o", - "data_collection" => [ "OpenAI", "Google" ] - ) - prefs_selective = provider_selective.send(:build_provider_preferences) - assert_equal [ "OpenAI", "Google" ], prefs_selective[:data_collection] - end - - test "handles multimodal content correctly" do - # Create a message with multimodal content - message = ActiveAgent::ActionPrompt::Message.new( - content: [ - { type: "text", text: "What's in this image?" }, - { type: "image_url", image_url: { url: "https://example.com/image.jpg" } } - ], - role: :user - ) - - prompt = ActiveAgent::ActionPrompt::Prompt.new( - messages: [ message ] - ) - - assert prompt.multimodal? 
- end - - test "converts file type to image_url for OpenRouter PDF support" do - provider = ActiveAgent::GenerationProvider::OpenRouterProvider.new( - "model" => "openai/gpt-4o" - ) - - # Test file type conversion - file_item = { - type: "file", - file: { - file_data: "data:application/pdf;base64,JVBERi0xLj..." - } - } - - formatted = provider.send(:format_content_item, file_item) - - assert_equal "image_url", formatted[:type] - assert_equal "data:application/pdf;base64,JVBERi0xLj...", formatted[:image_url][:url] - end - - test "respects configuration hierarchy for site_url" do - # Test with explicit site_url config - provider = ActiveAgent::GenerationProvider::OpenRouterProvider.new( - "model" => "openai/gpt-4o", - "site_url" => "https://configured.example.com" - ) - - assert_equal "https://configured.example.com", provider.instance_variable_get(:@site_url) - - # Test with default_url_options in config - provider = ActiveAgent::GenerationProvider::OpenRouterProvider.new( - "model" => "openai/gpt-4o", - "default_url_options" => { - "host" => "fromconfig.example.com" - } - ) - - assert_equal "fromconfig.example.com", provider.instance_variable_get(:@site_url) - end - - test "handles rate limit information in metadata" do - provider = ActiveAgent::GenerationProvider::OpenRouterProvider.new( - "model" => "openai/gpt-4o" - ) - - # Create a mock response - prompt = ActiveAgent::ActionPrompt::Prompt.new(message: "test") - response = ActiveAgent::GenerationProvider::Response.new(prompt: prompt) - - headers = { - "x-provider" => "OpenAI", - "x-model" => "gpt-4o", - "x-ratelimit-requests-limit" => "100", - "x-ratelimit-requests-remaining" => "99", - "x-ratelimit-tokens-limit" => "10000", - "x-ratelimit-tokens-remaining" => "9500" - } - - provider.send(:add_openrouter_metadata, response, headers) - - assert_equal "100", response.metadata[:ratelimit][:requests_limit] - assert_equal "99", response.metadata[:ratelimit][:requests_remaining] - assert_equal "10000", response.metadata[:ratelimit][:tokens_limit] - assert_equal "9500", response.metadata[:ratelimit][:tokens_remaining] - end - - test "includes plugins parameter when passed in options" do - provider = ActiveAgent::GenerationProvider::OpenRouterProvider.new( - "model" => "openai/gpt-4o" - ) - - # Create a prompt with plugins option - prompt = ActiveAgent::ActionPrompt::Prompt.new( - message: "test", - options: { - plugins: [ - { - id: "file-parser", - pdf: { - engine: "pdf-text" - } - } - ] - } - ) - - # Set the prompt on the provider - provider.instance_variable_set(:@prompt, prompt) - - # Build parameters and verify plugins are included - parameters = provider.send(:build_openrouter_parameters) - - assert_not_nil parameters[:plugins] - assert_equal 1, parameters[:plugins].size - assert_equal "file-parser", parameters[:plugins][0][:id] - assert_equal "pdf-text", parameters[:plugins][0][:pdf][:engine] - end -end diff --git a/test/agents/privacy_focused_agent_test.rb b/test/agents/privacy_focused_agent_test.rb deleted file mode 100644 index f5ca7c77..00000000 --- a/test/agents/privacy_focused_agent_test.rb +++ /dev/null @@ -1,80 +0,0 @@ -require "test_helper" - -class PrivacyFocusedAgentTest < ActiveSupport::TestCase - setup do - @agent = PrivacyFocusedAgent.new - end - - test "configures agent with data collection denied" do - # Verify the agent is configured with data_collection: "deny" - provider = @agent.send(:generation_provider) - - # The provider should be an OpenRouter provider - assert_kind_of 
ActiveAgent::GenerationProvider::OpenRouterProvider, provider - - # Create a prompt context to test with - prompt_context = @agent.prompt(message: "test") - - # Set the prompt on the provider to simulate real usage - provider.instance_variable_set(:@prompt, prompt_context) - - # Now check that data collection is properly set - prefs = provider.send(:build_provider_preferences) - assert_equal "deny", prefs[:data_collection] - end - - test "processes financial data with privacy settings" do - skip "Requires actual OpenRouter API key" unless has_openrouter_credentials? - - VCR.use_cassette("privacy_focused_financial_analysis") do - # region financial_data_test - financial_data = { - revenue: 1_000_000, - expenses: 750_000, - profit_margin: 0.25, - quarter: "Q3 2024" - } - - response = PrivacyFocusedAgent.with( - financial_data: financial_data.to_json, - analysis_type: "risk_assessment" - ).analyze_financial_data.generate_now - - assert_not_nil response - assert_not_nil response.message - assert_not_nil response.message.content - assert response.message.content.include?("risk") || response.message.content.include?("financial") - - # Verify the request was made with data_collection: "deny" - # This ensures the data won't be used for training - doc_example_output(response) - # endregion financial_data_test - end - end - - test "processes medical records with selective provider collection" do - skip "Requires actual OpenRouter API key" unless has_openrouter_credentials? - - VCR.use_cassette("privacy_focused_medical_records") do - # region medical_records_test - medical_record = { - patient_id: "REDACTED", - diagnosis: "Example condition", - treatment: "Standard protocol", - date: "2024-01-01" - } - - response = PrivacyFocusedAgent.with( - record: medical_record.to_json - ).process_medical_records.generate_now - - assert_not_nil response - assert_not_nil response.message - assert_not_nil response.message.content - - # The response should handle the medical data appropriately - doc_example_output(response) - # endregion medical_records_test - end - end -end diff --git a/test/agents/queued_generation_test.rb b/test/agents/queued_generation_test.rb deleted file mode 100644 index 9b9c682f..00000000 --- a/test/agents/queued_generation_test.rb +++ /dev/null @@ -1,29 +0,0 @@ -require "test_helper" - -class QueuedGenerationTest < ActiveSupport::TestCase - include ActiveJob::TestHelper - test "generate_later enqueues a generation job" do - # region queued_generation_generate_later - prompt = ApplicationAgent.with(message: "Process this later").prompt_context - - # Enqueue the generation job - assert_enqueued_with(job: ActiveAgent::GenerationJob) do - prompt.generate_later - end - # endregion queued_generation_generate_later - end - - test "generate_later with custom queue and priority" do - # region queued_generation_custom_queue - prompt = ApplicationAgent.with(message: "Priority task").prompt_context - - # Enqueue with specific queue and priority - assert_enqueued_with( - job: ActiveAgent::GenerationJob, - queue: "high_priority" - ) do - prompt.generate_later(queue: "high_priority", priority: 10) - end - # endregion queued_generation_custom_queue - end -end diff --git a/test/agents/scoped_agents/translation_agent_with_custom_instructions_template_test.rb b/test/agents/scoped_agents/translation_agent_with_custom_instructions_template_test.rb deleted file mode 100644 index e8cb9082..00000000 --- a/test/agents/scoped_agents/translation_agent_with_custom_instructions_template_test.rb +++ /dev/null @@ -1,57 
+0,0 @@ -require "test_helper" - -class ScopedAgents::TranslationAgentWithCustomInstructionsTemplateTest < ActiveSupport::TestCase - test "it uses instructions from custom_instructions template, embedding locales and an instance variable" do - translate_prompt = ScopedAgents::TranslationAgentWithCustomInstructionsTemplate.with( - message: "Hi, I'm Justin", locale: "japanese" - ).translate - - assert_equal "# Custom Instructions\n\ntranslation additional instruction\nTranslate the given text from English to French.\n", translate_prompt.instructions - end - - test "it uses overridden instructions for prompt" do - translate_prompt = ScopedAgents::TranslationAgentWithCustomInstructionsTemplate.with( - message: "Hi, I'm Justin", locale: "japanese" - ).translate_with_overridden_instructions - - assert_equal "# Overridden Instructions\n\nTranslate the given text from one language to another.\n", translate_prompt.instructions - end - - test "it does not include default system method `prompt_context` in action schemas" do - translate_prompt = ScopedAgents::TranslationAgentWithCustomInstructionsTemplate.with( - message: "Hi, I'm Justin", locale: "japanese" - ).translate_with_overridden_instructions - action_names = translate_prompt.actions.map { |a| a["function"]["name"] } - - refute_includes action_names, "prompt_context" - end - - test "it returns action schemas for user methods except the called method translate_with_overridden_instructions" do - translate_prompt = ScopedAgents::TranslationAgentWithCustomInstructionsTemplate.with( - message: "Hi, I'm Justin", locale: "japanese" - ).translate_with_overridden_instructions - - assert_equal 1, translate_prompt.actions.size - assert_equal "translate", translate_prompt.actions[0]["function"]["name"] - end - - test "it returns action schemas for user methods except the called method translate" do - translate_prompt = ScopedAgents::TranslationAgentWithCustomInstructionsTemplate.with( - message: "Hi, I'm Justin", locale: "japanese" - ).translate - - assert_equal 1, translate_prompt.actions.size - assert_equal "translate_with_overridden_instructions", translate_prompt.actions[0]["function"]["name"] - end - - test "it returns action schemas for all user methods when prompt_context is called" do - translate_prompt = ScopedAgents::TranslationAgentWithCustomInstructionsTemplate.with( - message: "Hi, I'm Justin", locale: "japanese" - ).prompt_context - action_names = translate_prompt.actions.map { |a| a["function"]["name"] } - - assert_equal 2, translate_prompt.actions.size - assert_includes action_names, "translate" - assert_includes action_names, "translate_with_overridden_instructions" - end -end diff --git a/test/agents/scoped_agents/translation_agent_with_default_instructions_template_test.rb b/test/agents/scoped_agents/translation_agent_with_default_instructions_template_test.rb deleted file mode 100644 index a2f56bae..00000000 --- a/test/agents/scoped_agents/translation_agent_with_default_instructions_template_test.rb +++ /dev/null @@ -1,19 +0,0 @@ -require "test_helper" - -class ScopedAgents::TranslationAgentWithDefaultInstructionsTemplateTest < ActiveSupport::TestCase - test "it uses instructions from default instructions template" do - translate_prompt = ScopedAgents::TranslationAgentWithDefaultInstructionsTemplate.with( - message: "Hi, I'm Justin", locale: "japanese" - ).translate - - assert_equal "# Default Instructions\n\nTranslate the given text from one language to another.\n", translate_prompt.instructions - end - - test "it uses instructions 
from default instructions template with dynamic param in the template" do - translate_prompt = ScopedAgents::TranslationAgentWithDefaultInstructionsTemplate.with( - message: "Hi, I'm Justin", locale: "japanese", source_language: "english" - ).translate - - assert_equal "# Default Instructions\n\nTranslate the given text from english language to another.\n", translate_prompt.instructions - end -end diff --git a/test/agents/scraping_agent_multiturn_test.rb b/test/agents/scraping_agent_multiturn_test.rb deleted file mode 100644 index fea83d2f..00000000 --- a/test/agents/scraping_agent_multiturn_test.rb +++ /dev/null @@ -1,52 +0,0 @@ -require "test_helper" - -class ScrapingAgentMultiturnTest < ActiveSupport::TestCase - test "scraping agent uses tools to check Google homepage" do - VCR.use_cassette("scraping_agent_google_check") do - response = ScrapingAgent.with( - message: "Are there any notices on the Google homepage?" - ).prompt_context.generate_now - - # Check we got a response - assert response.message.present? - assert response.message.content.present? - - # Check the final message mentions Google/homepage/notices - assert response.message.content.downcase.include?("google") || - response.message.content.downcase.include?("homepage") || - response.message.content.downcase.include?("notice"), - "Response should mention Google, homepage, or notices" - - # Check the message history shows tool usage - messages = response.prompt.messages - - # Should have system, user, assistant(s), and tool messages - assert messages.any? { |m| m.role == :system }, "Should have system message" - assert messages.any? { |m| m.role == :user }, "Should have user message" - assert messages.any? { |m| m.role == :assistant }, "Should have assistant messages" - assert messages.any? 
{ |m| m.role == :tool }, "Should have tool messages" - - # Check tool messages have the expected structure - tool_messages = messages.select { |m| m.role == :tool } - assert tool_messages.length >= 1, "Should have at least one tool message" - - tool_messages.each do |tool_msg| - assert tool_msg.action_id.present?, "Tool message should have action_id" - assert tool_msg.action_name.present?, "Tool message should have action_name" - assert [ "visit", "read_current_page" ].include?(tool_msg.action_name), - "Tool name should be visit or read_current_page" - end - - # Verify specific tools were called - tool_names = tool_messages.map(&:action_name) - assert tool_names.include?("visit"), "Should have called visit tool" - assert tool_names.include?("read_current_page"), "Should have called read_current_page tool" - - # Tool messages in the prompt.messages array show they were executed - # The actual content is returned separately (not in these tool messages) - - # Generate documentation example - doc_example_output(response) - end - end -end diff --git a/test/agents/scraping_agent_tool_content_test.rb b/test/agents/scraping_agent_tool_content_test.rb deleted file mode 100644 index 3e9ceed6..00000000 --- a/test/agents/scraping_agent_tool_content_test.rb +++ /dev/null @@ -1,69 +0,0 @@ -require "test_helper" - -class ScrapingAgentToolContentTest < ActiveSupport::TestCase - test "tool messages should contain rendered view content" do - VCR.use_cassette("scraping_agent_tool_content") do - response = ScrapingAgent.with( - message: "Check the Google homepage" - ).prompt_context.generate_now - - # Get tool messages from the response - tool_messages = response.prompt.messages.select { |m| m.role == :tool } - - # We expect tool messages to be present - assert tool_messages.any?, "Should have tool messages" - - # Check each tool message - tool_messages.each do |tool_msg| - # FAILING: Tool messages should have the rendered content from their views - # Currently they have empty content "" - if tool_msg.action_name == "visit" - # Should contain "Navigation resulted in 200 status code." from visit.text.erb - assert tool_msg.content.present?, - "Visit tool message should have content from visit.text.erb template" - assert tool_msg.content.include?("Navigation") || tool_msg.content.include?("200"), - "Visit tool message should contain rendered template output" - elsif tool_msg.action_name == "read_current_page" - # Should contain "Title: Google\nBody: ..." 
from read_current_page.text.erb - assert tool_msg.content.present?, - "Read tool message should have content from read_current_page.text.erb template" - assert tool_msg.content.include?("Title:") || tool_msg.content.include?("Body:"), - "Read tool message should contain rendered template output" - end - end - - # Also check the raw_request to see what's being sent to OpenAI - if response.raw_request - response.raw_request[:messages].select { |m| m[:role] == "tool" } - end - end - end - - test "tool action rendering should populate message content" do - agent = ScrapingAgent.new - agent.context = ActiveAgent::ActionPrompt::Prompt.new - - # Create a mock action - action = ActiveAgent::ActionPrompt::Action.new( - id: "test_visit_123", - name: "visit", - params: { url: "https://example.com" } - ) - - # Perform the action - agent.send(:perform_action, action) - - # Get the tool message that was added - tool_message = agent.context.messages.last - - assert_equal :tool, tool_message.role - assert_equal "test_visit_123", tool_message.action_id - assert_equal "visit", tool_message.action_name - - # This is the key assertion - the tool message should have the rendered content - assert tool_message.content.present?, - "Tool message should have content from the rendered view" - assert tool_message.content.include?("Navigation resulted in"), - "Tool message should contain the rendered visit.text.erb template" - end -end diff --git a/test/agents/streaming_agent_test.rb b/test/agents/streaming_agent_test.rb deleted file mode 100644 index 0f20bf22..00000000 --- a/test/agents/streaming_agent_test.rb +++ /dev/null @@ -1,66 +0,0 @@ -require "test_helper" - -class StreamingAgentTest < ActiveSupport::TestCase - test "it renders a prompt with a message" do - assert_equal "Test Streaming", StreamingAgent.with(message: "Test Streaming").prompt_context.message.content - end - - test "it uses the correct model and instructions" do - prompt = StreamingAgent.with(message: "Test").prompt_context - assert_equal "gpt-4.1-nano", prompt.options[:model] - system_message = prompt.messages.find { |m| m.role == :system } - assert_equal "You're a chat agent. 
Your job is to help users with their questions.", system_message.content - end - - test "it broadcasts the expected number of times for streamed chunks" do - # Mock ActionCable.server.broadcast - broadcast_calls = [] - ActionCable.server.singleton_class.class_eval do - alias_method :orig_broadcast, :broadcast - define_method(:broadcast) do |*args| - broadcast_calls << args - end - end - - VCR.use_cassette("streaming_agent_stream_response") do - # region streaming_agent_stream_response - StreamingAgent.with(message: "Stream this message").prompt_context.generate_now - # endregion streaming_agent_stream_response - end - - assert_equal 54, broadcast_calls.size - assert_equal "It looks like you'd like to stream a message.", broadcast_calls[9].last.dig(:locals, :message) - ensure - # Restore original broadcast method - ActionCable.server.singleton_class.class_eval do - alias_method :broadcast, :orig_broadcast - remove_method :orig_broadcast - end - end - - test "it broadcasts deltas" do - # Mock ActionCable.server.broadcast - broadcast_calls = [] - ActionCable.server.singleton_class.class_eval do - alias_method :orig_broadcast, :broadcast - define_method(:broadcast) do |*args| - broadcast_calls << args - end - end - - VCR.use_cassette("streaming_agent_stream_response") do - # region streaming_agent_stream_response - StreamingAgent.with(message: "Stream this message", delta: true).prompt_context.generate_now - # endregion streaming_agent_stream_response - end - - assert_equal 54, broadcast_calls.size - assert_equal ".", broadcast_calls[9].last.dig(:locals, :message) - ensure - # Restore original broadcast method - ActionCable.server.singleton_class.class_eval do - alias_method :broadcast, :orig_broadcast - remove_method :orig_broadcast - end - end -end diff --git a/test/agents/support_agent_test.rb b/test/agents/support_agent_test.rb deleted file mode 100644 index 2ab211b9..00000000 --- a/test/agents/support_agent_test.rb +++ /dev/null @@ -1,107 +0,0 @@ -# test/support_agent_test.rb -require "test_helper" - -class SupportAgentTest < ActiveSupport::TestCase - test "it renders a prompt with an 'Test' message using the Application Agent's prompt_context" do - assert_equal "Test", SupportAgent.with(message: "Test").prompt_context.message.content - end - - test "it renders a prompt_context generates a response with a tool call and performs the requested actions" do - VCR.use_cassette("support_agent_prompt_context_tool_call_response") do - # region support_agent_tool_call - message = "Show me a cat" - prompt = SupportAgent.with(message: message).prompt_context - # endregion support_agent_tool_call - assert_equal message, prompt.message.content - # region support_agent_tool_call_response - response = prompt.generate_now - # endregion support_agent_tool_call_response - - doc_example_output(response) - - # Messages include system, user, assistant, and tool messages - assert response.prompt.messages.size >= 5 - - # Group messages by role - system_messages = response.prompt.messages.select { |m| m.role == :system } - user_messages = response.prompt.messages.select { |m| m.role == :user } - assistant_messages = response.prompt.messages.select { |m| m.role == :assistant } - tool_messages = response.prompt.messages.select { |m| m.role == :tool } - - # SupportAgent has instructions from generate_with - assert system_messages.any?, "Should have system messages" - assert_equal "You're a support agent. 
Your job is to help users with their questions.", - system_messages.first.content, - "System message should contain SupportAgent's generate_with instructions" - - assert_equal 1, user_messages.size - assert_equal 2, assistant_messages.size - assert_equal 1, tool_messages.size - - assert_equal response.message, response.prompt.messages.last - assert_includes tool_messages.first.content, "https://cataas.com/cat/" - end - end - - test "it generates a sematic description for vector embeddings" do - VCR.use_cassette("support_agent_tool_call") do - message = "Show me a cat" - prompt = SupportAgent.with(message: message).prompt_context - response = prompt.generate_now - assert_equal message, SupportAgent.with(message: message).prompt_context.message.content - - # Messages include system, user, assistant, and tool messages - assert response.prompt.messages.size >= 5 - - # Group messages by role - system_messages = response.prompt.messages.select { |m| m.role == :system } - user_messages = response.prompt.messages.select { |m| m.role == :user } - assistant_messages = response.prompt.messages.select { |m| m.role == :assistant } - tool_messages = response.prompt.messages.select { |m| m.role == :tool } - - # SupportAgent has instructions from generate_with - assert system_messages.any?, "Should have system messages" - assert_equal "You're a support agent. Your job is to help users with their questions.", - system_messages.first.content, - "System message should contain SupportAgent's generate_with instructions" - - assert_equal 1, user_messages.size - assert_equal 2, assistant_messages.size - assert_equal 1, tool_messages.size - end - end - - test "it makes a tool call with streaming enabled" do - prompt = nil - prompt_message = nil - test_prompt_message = "Show me a cat" - VCR.use_cassette("support_agent_streaming_tool_call") do - prompt = SupportAgent.with(message: test_prompt_message).prompt_context - prompt_message = prompt.message.content - end - - VCR.use_cassette("support_agent_streaming_tool_call_response") do - response = prompt.generate_now - assert_equal test_prompt_message, prompt_message - - # Messages include system, user, assistant, and tool messages - assert response.prompt.messages.size >= 5 - - # Group messages by role - system_messages = response.prompt.messages.select { |m| m.role == :system } - user_messages = response.prompt.messages.select { |m| m.role == :user } - assistant_messages = response.prompt.messages.select { |m| m.role == :assistant } - tool_messages = response.prompt.messages.select { |m| m.role == :tool } - - # SupportAgent has instructions from generate_with - assert system_messages.any?, "Should have system messages" - assert_equal "You're a support agent. 
Your job is to help users with their questions.", - system_messages.first.content, - "System message should contain SupportAgent's generate_with instructions" - - assert_equal 1, user_messages.size - assert_equal 2, assistant_messages.size - assert_equal 1, tool_messages.size - end - end -end diff --git a/test/agents/tool_calling_agent_test.rb b/test/agents/tool_calling_agent_test.rb deleted file mode 100644 index 066484d5..00000000 --- a/test/agents/tool_calling_agent_test.rb +++ /dev/null @@ -1,101 +0,0 @@ -require "test_helper" - -class ToolCallingAgentTest < ActiveSupport::TestCase - test "agent can make multiple tool calls in sequence until completion" do - VCR.use_cassette("tool_calling_agent_multi_turn", record: :new_episodes) do - # region multi_turn_tool_call - message = "Calculate the area of a rectangle with width 5 and height 10, then double it" - prompt = CalculatorAgent.with(message: message).prompt_context - response = prompt.generate_now - # endregion multi_turn_tool_call - - doc_example_output(response) - - # Messages should include system messages first, then user, assistant, and tool messages - assert response.prompt.messages.size >= 5 - - # System messages should be first (multiple empty ones may be added during prompt flow) - system_count = 0 - response.prompt.messages.each_with_index do |msg, i| - break if msg.role != :system - system_count = i + 1 - end - assert system_count >= 1, "Should have at least one system message at the beginning" - - # After system messages, should have user message - user_index = system_count - assert_equal :user, response.prompt.messages[user_index].role - assert_includes response.prompt.messages[user_index].content, "Calculate the area" - - # Then assistant message with tool call - assistant_index = user_index + 1 - assert_equal :assistant, response.prompt.messages[assistant_index].role - assert response.prompt.messages[assistant_index].action_requested - - # Then tool result - tool_index = assistant_index + 1 - assert_equal :tool, response.prompt.messages[tool_index].role - assert_equal "50.0", response.prompt.messages[tool_index].content - - # If there are more tool calls for doubling - if response.prompt.messages.size > tool_index + 2 - assert_equal :assistant, response.prompt.messages[tool_index + 1].role - assert_equal :tool, response.prompt.messages[tool_index + 2].role - assert_equal "100.0", response.prompt.messages[tool_index + 2].content - end - end - end - - test "agent can render views from tool calls" do - VCR.use_cassette("tool_calling_agent_view_render") do - # region tool_call_with_view - message = "Show me the current weather report" - prompt = WeatherAgent.with(message: message).prompt_context - response = prompt.generate_now - # endregion tool_call_with_view - - doc_example_output(response) - - # Check that view was rendered as tool result - tool_message = response.prompt.messages.find { |m| m.role == :tool } - assert_not_nil tool_message - assert_includes tool_message.content, "Searching for flights from <%= @departure %> to <%= @destination %>
-
-Would you like to book any of these flights?
\ No newline at end of file
diff --git a/test/dummy/app/views/travel_agent/search.json.jbuilder b/test/dummy/app/views/travel_agent/search.json.jbuilder
deleted file mode 100644
index b4d45d35..00000000
--- a/test/dummy/app/views/travel_agent/search.json.jbuilder
+++ /dev/null
@@ -1,23 +0,0 @@
-json.type :function
-json.function do
-  json.name "search"
-  json.description "Search for available flights to a destination"
-  json.parameters do
-    json.type :object
-    json.properties do
-      json.departure do
-        json.type :string
-        json.description "Departure city or airport code"
-      end
-      json.destination do
-        json.type :string
-        json.description "Destination city or airport code"
-      end
-      json.date do
-        json.type :string
-        json.description "Travel date in YYYY-MM-DD format"
-      end
-    end
-    json.required [ "destination" ]
-  end
-end
diff --git a/test/dummy/app/views/travel_agent/search.text.erb b/test/dummy/app/views/travel_agent/search.text.erb
deleted file mode 100644
index 7a5a31d6..00000000
--- a/test/dummy/app/views/travel_agent/search.text.erb
+++ /dev/null
@@ -1,11 +0,0 @@
-Travel Search Results
-====================
-
-Searching for flights from <%= @departure %> to <%= @destination %>
-
-Available flights:
-<% @results.each_with_index do |flight, i| %>
-<%= i + 1 %>. <%= flight[:airline] %> - $<%= flight[:price] %> (Departure: <%= flight[:departure] %>)
-<% end %>
-
-Please let me know which flight you'd like to book.
\ No newline at end of file
diff --git a/test/dummy/app/views/view_test/agents/test/other_instructions.text.erb b/test/dummy/app/views/view_test/agents/test/other_instructions.text.erb
new file mode 100644
index 00000000..3cb0579a
--- /dev/null
+++ b/test/dummy/app/views/view_test/agents/test/other_instructions.text.erb
@@ -0,0 +1 @@
+Hash <%= detail %> instructions template
diff --git a/test/dummy/app/views/view_test/agents/test_instructions_markdown/instructions.md.erb b/test/dummy/app/views/view_test/agents/test_instructions_markdown/instructions.md.erb
new file mode 100644
index 00000000..204f1a9d
--- /dev/null
+++ b/test/dummy/app/views/view_test/agents/test_instructions_markdown/instructions.md.erb
@@ -0,0 +1,3 @@
+# Test Instructions Markdown
+
+This is a test instructions file in markdown format.
diff --git a/test/dummy/app/views/view_test/agents/test_instructions_text/instructions.text.erb b/test/dummy/app/views/view_test/agents/test_instructions_text/instructions.text.erb
new file mode 100644
index 00000000..67062480
--- /dev/null
+++ b/test/dummy/app/views/view_test/agents/test_instructions_text/instructions.text.erb
@@ -0,0 +1 @@
+This is a test instructions file in text format.
diff --git a/test/dummy/app/views/weather_agent/convert_temperature.json.jbuilder b/test/dummy/app/views/weather_agent/convert_temperature.json.jbuilder
deleted file mode 100644
index 38b8cb13..00000000
--- a/test/dummy/app/views/weather_agent/convert_temperature.json.jbuilder
+++ /dev/null
@@ -1,22 +0,0 @@
-json.name action_name
-json.description "Convert temperature between Celsius and Fahrenheit"
-json.parameters do
-  json.type "object"
-  json.properties do
-    json.value do
-      json.type "number"
-      json.description "Temperature value to convert"
-    end
-    json.from do
-      json.type "string"
-      json.description "Unit to convert from (celsius or fahrenheit)"
-      json.enum [ "celsius", "fahrenheit" ]
-    end
-    json.to do
-      json.type "string"
-      json.description "Unit to convert to (celsius or fahrenheit)"
-      json.enum [ "celsius", "fahrenheit" ]
-    end
-  end
-  json.required [ "value", "from", "to" ]
-end
diff --git a/test/dummy/app/views/weather_agent/get_temperature.json.jbuilder b/test/dummy/app/views/weather_agent/get_temperature.json.jbuilder
deleted file mode 100644
index 4ba13560..00000000
--- a/test/dummy/app/views/weather_agent/get_temperature.json.jbuilder
+++ /dev/null
@@ -1,6 +0,0 @@
-json.name action_name
-json.description "Get the current temperature in Celsius"
-json.parameters do
-  json.type "object"
-  json.properties({})
-end
diff --git a/test/dummy/app/views/weather_agent/get_weather_report.json.jbuilder b/test/dummy/app/views/weather_agent/get_weather_report.json.jbuilder
deleted file mode 100644
index be684e5f..00000000
--- a/test/dummy/app/views/weather_agent/get_weather_report.json.jbuilder
+++ /dev/null
@@ -1,6 +0,0 @@
-json.name action_name
-json.description "Get a detailed weather report with HTML formatting"
-json.parameters do
-  json.type "object"
-  json.properties({})
-end
diff --git a/test/dummy/app/views/weather_agent/weather_report.html.erb b/test/dummy/app/views/weather_agent/weather_report.html.erb
deleted file mode 100644
index 6bfff0fc..00000000
--- a/test/dummy/app/views/weather_agent/weather_report.html.erb
+++ /dev/null
@@ -1,7 +0,0 @@
-