docs/gram/build-mcp/dynamic-toolsets.mdx

description: Enable very large MCP servers by making a toolset dynamic

import { Callout } from "@/mdx/components";

Dynamic toolsets enable very large MCP servers without overloading context windows. Instead of exposing all tools upfront like traditional MCP, dynamic toolsets provide "meta" tools that allow the LLM to discover only the tools it needs to complete specific tasks, delivering up to 160x token reduction while maintaining full functionality.

Our refined Dynamic Toolsets approach combines the best of semantic search and progressive discovery into a unified system that exposes three core tools. For detailed technical insights and performance benchmarks, see our [blog post on how we reduced token usage by 100x](/blog/how-we-reduced-token-usage-by-100x-dynamic-toolsets-v2).

## How Dynamic Toolsets work

Dynamic toolsets follow the natural workflow an LLM needs: search → describe → execute. The system compresses large toolsets into three meta-tools:

### `search_tools`

The LLM searches for relevant tools using natural language queries with embeddings-based semantic search. The tool description includes categorical overviews of available tools (e.g., "This toolset includes HubSpot CRM operations, deal management...") and supports filtering by tags like `source:hubspot` for precise discovery.
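
As an illustration, a `search_tools` invocation over MCP's standard `tools/call` method might look like the following. The `query` and `tags` argument names are hypothetical, not Gram's exact schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_tools",
    "arguments": {
      "query": "create and update deals in the CRM",
      "tags": ["source:hubspot"]
    }
  }
}
```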

### `describe_tools`

The LLM requests detailed schemas and documentation only for tools it intends to use. This separation optimizes token usage, since input schemas represent 60-80% of total tokens in static toolsets.

### `execute_tool`

The LLM executes discovered and described tools with proper parameters.
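
To make the three-step flow concrete, here is a minimal, self-contained sketch. It is not Gram's implementation: the registry, tool names, and keyword matching (standing in for embeddings-based semantic search) are all illustrative.

```python
# Toy tool registry: schemas stay out of context until describe_tools asks for them.
tools = {
    "hubspot_create_deal": {
        "summary": "Create a deal in HubSpot CRM",
        "tags": ["source:hubspot"],
        "schema": {"type": "object", "properties": {"name": {"type": "string"}}},
        "fn": lambda args: f"created deal {args['name']}",
    },
    "slack_post_message": {
        "summary": "Post a message to a Slack channel",
        "tags": ["source:slack"],
        "schema": {"type": "object", "properties": {"text": {"type": "string"}}},
        "fn": lambda args: f"posted {args['text']}",
    },
}

def search_tools(query, tag=None):
    # Gram uses embeddings; simple keyword overlap stands in for semantic search here.
    return [
        name for name, t in tools.items()
        if (tag is None or tag in t["tags"])
        and any(w in t["summary"].lower() for w in query.lower().split())
    ]

def describe_tools(names):
    # Schemas (the bulk of tool tokens) are only surfaced for the named tools.
    return {n: tools[n]["schema"] for n in names}

def execute_tool(name, args):
    return tools[name]["fn"](args)

# search -> describe -> execute
found = search_tools("create a deal", tag="source:hubspot")
schemas = describe_tools(found)
result = execute_tool(found[0], {"name": "Acme renewal"})
print(found, result)
```

Only the three meta-tool names and summaries ever sit in context upfront; the schema for `hubspot_create_deal` enters the conversation only after the LLM decides to use it.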

## Performance benefits

Dynamic toolsets deliver significant advantages over static toolsets:

**Massive token reduction**: Input tokens are reduced by an average of 96% for simple tasks and 91% for complex tasks, with total token usage dropping by 96% and 90% respectively.

**Consistent scaling**: Token usage remains relatively constant regardless of toolset size. A 400-tool dynamic toolset uses only ~8,000 tokens initially, compared to 410,000+ for the same static toolset.

**Context window compatibility**: Large toolsets that exceed Claude's 200k context window limit with static approaches work seamlessly with dynamic toolsets.

**Perfect reliability**: Maintains 100% success rates across all toolset sizes and task complexities.

## Tradeoffs

While dynamic toolsets offer significant benefits, there are some considerations:

**Increased tool calls**: Dynamic toolsets require 2-3x more tool calls (typically 6-8 for complex tasks vs 3 for static), following the search → describe → execute pattern.

**Potential latency**: Additional tool calls may introduce slight latency, though this is often offset by reduced token processing time.

**Complexity**: The multi-step discovery process adds complexity compared to direct tool access, though this is handled automatically by the LLM.

## Enabling dynamic toolsets

Head to the **MCP** tab in your Gram dashboard and switch your toolset from "Static" to "Dynamic" mode.

<Callout title="Note" type="info">
  This setting only applies to MCP and will not affect how your toolset is used in the playground, where static tool exposure remains useful for testing and development.
</Callout>

- Creates the correct configuration (by default in user-level `~/.claude.json`)

#### Setting up environment variables

If your toolset requires authentication, you'll need to set up environment variables. The `gram install` command will display the required variable names and provide the export command you need to run to set the variable value.

For the Taskmaster toolset, you'll need to set the `MCP_TASK_MASTER_API_KEY` environment variable to your Taskmaster API key. You can do this by running the following command:
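
For example (the value shown is a placeholder; substitute your real key):

```shell
# Set the Taskmaster API key for the current shell session.
# Replace the placeholder with your actual key.
export MCP_TASK_MASTER_API_KEY="your-taskmaster-api-key"
```

To persist it across sessions, add the line to your shell profile (e.g., `~/.bashrc` or `~/.zshrc`).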

Similarly, to edit a tool's description, click the three dots and select **Edit description**. Use the dedicated modal to validate and save your updated description.

docs/gram/examples/using-environments-with-vercel-ai-sdk.mdx

Learn how to use Gram environments with the Vercel AI SDK to manage authentication and configuration.

[Environments](/docs/gram/concepts/environments) in Gram manage authentication credentials, API keys, and configuration variables for [toolsets](/docs/gram/concepts/toolsets). When building agent applications with the Vercel AI SDK or other AI frameworks, environments provide a secure way to manage the credentials needed to execute tools.

## Overview

Environments enable secure credential management without hardcoding sensitive information. They work by:

- Storing credentials and configuration in Gram's secure environment system
- Binding tools to a specific environment at initialization
- Executing all tool calls using the credentials from that environment

This approach is particularly useful for:

- **Multi-environment deployments**: Separate development, staging, and production credentials
- **Team collaboration**: Share toolsets without exposing credentials
- **Enterprise security**: Centralized credential management and rotation

Learn more about [environment concepts](/docs/gram/concepts/environments) and how to [configure environments](/docs/gram/gram-functions/configuring-environments).

## Using environments with Vercel AI SDK

The Vercel AI SDK integration demonstrates how to bind tools to a specific environment. The `environment` parameter in the `tools()` method determines which credentials the tools will use at runtime.

```ts filename="vercel-example.ts" {15}
import { generateText } from "ai";

// ... (model, adapter, and toolset setup elided in this excerpt)

const result = await generateText({
  // ...
});

console.log(result.text);
```

In this example, all tools in the `marketing` toolset execute using credentials from the `production` environment. This keeps production API keys secure and separate from development credentials.

See the [Vercel AI SDK integration guide](/docs/gram/api-clients/using-vercel-ai-sdk-with-gram-mcp-servers) for complete setup instructions.

## Using environments with OpenAI Agents SDK

The OpenAI Agents SDK follows the same pattern, binding tools to an environment at initialization:

```py filename="openai-agents-example.py" {17}
import asyncio
import os

# ... (agent and toolset setup elided in this excerpt)

if __name__ == "__main__":
    asyncio.run(main())
```

The Python SDK provides the same environment binding capabilities, ensuring consistent credential management across different programming languages.

Learn more about [using the OpenAI Agents SDK with Gram](/docs/gram/api-clients/using-openai-agents-sdk-with-gram-mcp-servers).

## Bring your own environment variables

For applications where end users provide their own credentials, pass environment variables directly to the SDK adapter instead of referencing a managed environment. This is common when:

- Building multi-tenant applications where each user has their own API keys
- Creating developer tools where users supply their own credentials
- Distributing applications that integrate with user-owned services

```ts
const vercelAdapter = new VercelAdapter({
  // ... (environmentVariables and other options elided in this excerpt)
});
```

The `environmentVariables` object accepts any key-value pairs that the toolset's functions require. This approach bypasses Gram's environment system entirely, using credentials provided at runtime instead.

### Python example

```py filename="byo-env-vars.py" {6-9}
import os
from gram_ai.openai_agents import GramOpenAIAgents

gram = GramOpenAIAgents(
    # ... (API key and environment_variables mapping elided in this excerpt)
)
```

Both SDK adapters support the same environment-variables parameter (`environmentVariables` in TypeScript, `environment_variables` in Python), providing flexibility in how credentials are managed.

## Choosing between environments and direct variables

Consider these factors when deciding between managed environments and direct variables: