Commit 6cfdc9d

Merge branch 'main' into updates/screenshots-nov-2025

Resolved conflicts in tool-variations.mdx by keeping the main branch's video tag format with autoplay, loop, and accessibility features.

2 parents: e5f9d0e + 8fc7fc8

4 files changed: +134 −31 lines

docs/gram/build-mcp/dynamic-toolsets.mdx

Lines changed: 39 additions & 20 deletions
@@ -5,54 +5,73 @@ description: Enable very large MCP servers by making a toolset dynamic

import { Callout } from "@/mdx/components";

-Dynamic toolsets enable very large MCP servers without overloading context windows. Instead of exposing all tools upfront like traditional MCP, dynamic toolsets provide "meta" tools that allow the LLM to discover only the tools it needs to complete specific tasks, optimizing token and context management.
+Dynamic toolsets enable very large MCP servers without overloading context windows. Instead of exposing all tools upfront like traditional MCP, dynamic toolsets provide "meta" tools that allow the LLM to discover only the tools it needs to complete specific tasks, delivering up to 160x token reduction while maintaining full functionality.

-Gram exposes two types of dynamic toolsets, both of which are experimental:
+Our refined Dynamic Toolsets approach combines the best of semantic search and progressive discovery into a unified system that exposes three core tools. For detailed technical insights and performance benchmarks, see our [blog post on how we reduced token usage by 100x](/blog/how-we-reduced-token-usage-by-100x-dynamic-toolsets-v2).

-## Progressive Search
+## How Dynamic Toolsets work

-Progressive Search uses a "progressive discovery" approach to surface tools. Tools are organized into groups that the LLM can inspect to gradually discover what tools are available to it. Details of tools are only exposed when needed, for example tool schemas (which represent a large portion of tool token use) are only surfaced when the LLM decides it actually wants to use a specific tool. The toolset is compressed into three tools that actually get exposed directly to the LLM:
+Dynamic toolsets follow the natural workflow an LLM needs: search → describe → execute. The system compresses large toolsets into three meta-tools:

-### `list_tools`
+### `search_tools`

-The LLM can discover available tools using prefix-based lookup (e.g., `list_tools(/hubspot/deals/*)`). This process is accelerated by providing the structure of available tools in the tool description, creating a hierarchy of available sources and tags. This allows the LLM full control over what tools it discovers and when.
+The LLM searches for relevant tools using natural language queries with embeddings-based semantic search. The tool description includes categorical overviews of available tools (e.g., "This toolset includes HubSpot CRM operations, deal management...") and supports filtering by tags like `source:hubspot` for precise discovery.

### `describe_tools`

-The LLM can look up detailed information about specific tools, including input schemas. While this could be combined with `list_tools`, the input schemas represent a significant portion of tokens, so keeping them separate optimizes token and context management at the cost of speed.
+The LLM requests detailed schemas and documentation only for tools it intends to use. This separation optimizes token usage since input schemas represent 60-80% of total tokens in static toolsets.

### `execute_tool`

-Execute the discovered and described tools as needed for the specific task.
+The LLM executes discovered and described tools with proper parameters.

-## Semantic Search
+## Performance benefits

-Semantic Search provides an embeddings-based approach to tool discovery. Embeddings are created in advance for all the tools in a toolset, then searched over to find relevant tools for a given task.
+Dynamic toolsets deliver significant advantages over static toolsets:

-### `find_tools`
+**Massive token reduction**: Input tokens are reduced by an average of 96% for simple tasks and 91% for complex tasks, with total token usage dropping by 96% and 90% respectively.

-The LLM can execute semantic search over embeddings created from all tools in the toolset, allowing for more intuitive tool discovery based on natural language descriptions of what it wants to accomplish. This is generally faster than Progressive Search especially for large toolsets, but has less complete coverage and may result in worse discovery. The LLM has no insight into what tools are available broadly and can only operate off of whatever the semantic search returns.
+**Consistent scaling**: Token usage remains relatively constant regardless of toolset size. A 400-tool dynamic toolset uses only ~8,000 tokens initially compared to 410,000+ for the same static toolset.

-### `execute_tool`
+**Context window compatibility**: Large toolsets that exceed Claude's 200k context window limit with static approaches work seamlessly with dynamic toolsets.
+
+**Perfect reliability**: Maintains 100% success rates across all toolset sizes and task complexities.
+
+### Sample performance data
+
+| Toolset Size | Mode    | Simple Task Tokens | Tool Calls | Complex Task Tokens | Tool Calls |
+|--------------|---------|--------------------|------------|---------------------|------------|
+| 100 tools    | Static  | 159,218            | 1          | 159,216             | 3          |
+| 100 tools    | Dynamic | 8,401              | 3          | 18,095              | 7          |
+| 400 tools    | Static  | 410,738            | 1          | 410,661             | 3          |
+| 400 tools    | Dynamic | 8,421              | 3          | 31,355              | 7.8        |
+
+## Trade-offs

-Execute the tools found through semantic search.
+While dynamic toolsets offer significant benefits, there are some considerations:

-## Benefits
+**Increased tool calls**: Dynamic toolsets require 2-3x more tool calls (typically 6-8 for complex tasks vs 3 for static), following the search → describe → execute pattern.

-Both dynamic toolset approaches share the same core benefit: they avoid dumping all tools into context upfront. Instead, they expose the LLM to only the tools actually needed for a given task, making it possible to work with very large toolsets while maintaining efficient context usage.
+**Potential latency**: Additional tool calls may introduce slight latency, though this is often offset by reduced token processing time.

-This approach is particularly valuable when working with extensive APIs or large collections of tools where loading everything at once would exceed context limits or create unnecessary complexity.
+**Complexity**: The multi-step discovery process adds complexity compared to direct tool access, though this is handled automatically by the LLM.

## Enabling dynamic toolsets

-Head to the `MCP` tab to switch your toolset to one of the above dynamic modes.
+Head to the **MCP** tab in your Gram dashboard and switch your toolset from "Static" to "Dynamic" mode.

<Callout title="Note" type="info">
-This setting only applies to MCP, and will not affect how your toolset is used in the playground.
+This setting only applies to MCP and will not affect how your toolset is used in the playground, where static tool exposure remains useful for testing and development.
</Callout>

-![enabling dynamic toolsets](/assets/docs/gram/img/dashboard/tool-selection-mode.png)
+Dynamic toolsets are particularly valuable for:
+- APIs with 100+ operations
+- Enterprise systems with comprehensive toolsets
+- Applications where context window limits are a concern
+- Production environments requiring predictable costs

## Additional reading

+- [How we reduced token usage by 100x with Dynamic Toolsets](/blog/how-we-reduced-token-usage-by-100x-dynamic-toolsets-v2)
- [Code Execution with MCP](https://www.anthropic.com/engineering/code-execution-with-mcp)
+- [Previous Dynamic Toolsets implementation](/blog/100x-token-reduction-dynamic-toolsets)
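The search → describe → execute workflow described in the updated page can be sketched as a small, self-contained simulation. Tool names (`search_tools`, `describe_tools`, `execute_tool`) match the docs; the in-memory registry, the example tool names, and the function signatures are illustrative assumptions, not the real Gram API:

```typescript
// Illustrative model of the three meta-tools; not the real Gram implementation.
type ToolDef = { name: string; summary: string; schema: object };

// Hypothetical registry standing in for a large toolset.
const registry: ToolDef[] = [
  { name: "hubspot_create_deal", summary: "Create a HubSpot deal", schema: { amount: "number" } },
  { name: "hubspot_list_deals", summary: "List HubSpot deals", schema: {} },
  { name: "slack_post_message", summary: "Post a Slack message", schema: { text: "string" } },
];

// 1. search_tools: cheap keyword match standing in for embeddings-based search.
function searchTools(query: string): string[] {
  return registry
    .filter((t) => t.summary.toLowerCase().includes(query.toLowerCase()))
    .map((t) => t.name);
}

// 2. describe_tools: full schemas are fetched only for tools the LLM picked,
// which is where the bulk of the token savings comes from.
function describeTools(names: string[]): ToolDef[] {
  return registry.filter((t) => names.includes(t.name));
}

// 3. execute_tool: run one described tool with parameters.
function executeTool(name: string, args: Record<string, unknown>): string {
  return `executed ${name} with ${JSON.stringify(args)}`;
}

const found = searchTools("deal");           // matches both HubSpot deal tools
const described = describeTools([found[0]]); // schema loaded for one tool only
const result = executeTool(described[0].name, { amount: 500 });
console.log(result);
```

The point of the sketch is the ordering: the full schema for a tool enters context only after the model has committed to using it.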

docs/gram/clients/using-claude-code-with-gram-mcp-servers.mdx

Lines changed: 12 additions & 3 deletions
@@ -55,9 +55,19 @@ gram install claude-code --toolset taskmaster

This command automatically:
- Fetches your toolset configuration from Gram
- Derives the MCP URL and authentication settings
-- Creates the correct configuration (by default in user-level `~/.claude/settings.local.json`)
+- Creates the correct configuration (by default in user-level `~/.claude.json`)

-**Configuration Scopes:**
+#### Setting up environment variables
+
+If your toolset requires authentication, you'll need to set up environment variables. The `gram install` command will display the required variable names and provide the export command you need to run to set the variable value.
+
+For the Taskmaster toolset, you'll need to set the `MCP_TASK_MASTER_API_KEY` environment variable to your Taskmaster API key. You can do this by running the following command:
+
+```bash
+export MCP_TASK_MASTER_API_KEY='your-api-key-value'
+```
+
+#### Configuration Scopes

You can control where the MCP server configuration is installed using the `--scope` flag:

@@ -211,4 +221,3 @@ If Claude Code isn't calling the tools:

You now have Claude Code connected to a Gram-hosted MCP server with task management capabilities.

Ready to build your own MCP server? [Try Gram today](/product/gram) and see how easy it is to turn any API into agent-ready tools.
-
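After running the export command shown above, it can be worth verifying that the variable is actually visible before launching Claude Code. A POSIX-shell sketch (the variable name comes from the docs; the guard itself is just an illustrative pattern, and the value shown is a placeholder):

```shell
# Set the key (placeholder value) and fail fast if it is missing or empty.
export MCP_TASK_MASTER_API_KEY='your-api-key-value'

if [ -z "${MCP_TASK_MASTER_API_KEY}" ]; then
  echo "MCP_TASK_MASTER_API_KEY is not set" >&2
  exit 1
fi
echo "MCP_TASK_MASTER_API_KEY is set"
```

Because `export` only affects the current shell and its children, run this in the same shell session you will start Claude Code from.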

docs/gram/concepts/tool-variations.md renamed to docs/gram/concepts/tool-variations.mdx

Lines changed: 10 additions & 6 deletions
@@ -18,14 +18,18 @@ Under the **Tools** tab for a toolset, you can edit the name and description of

To edit a tool's name, click the 3 dots and select **Edit name**. Update the tool name in the modal that opens.

-<video width="600" controls>
-  <source src="/assets/docs/gram/videos/tool-variations/editing-tool-name.mp4" type="video/mp4" />
-  Your browser does not support the video tag.
+<video controls={false} aria-label="Editing a tool name" loop={true} autoPlay={true} muted={true} width="100%">
+  <source
+    src="/assets/docs/gram/videos/tool-variations/editing-tool-name.mp4"
+    type="video/mp4"
+  />
</video>

Similarly, to edit a tool's description, click the 3 dots and select **Edit description**. Use the dedicated modal to validate and save your updated description:

-<video width="600" controls>
-  <source src="/assets/docs/gram/videos/tool-variations/editing-tool-description.mp4" type="video/mp4" />
-  Your browser does not support the video tag.
+<video controls={false} aria-label="Editing a tool description" loop={true} autoPlay={true} muted={true} width="100%">
+  <source
+    src="/assets/docs/gram/videos/tool-variations/editing-tool-description.mp4"
+    type="video/mp4"
+  />
</video>

docs/gram/examples/using-environments-with-vercel-ai-sdk.mdx

Lines changed: 73 additions & 2 deletions
@@ -4,7 +4,27 @@ description: |
  Learn how to use Gram environments with the Vercel AI SDK to manage authentication and configuration.
---

-When initializing the Gram SDKs, specify which environment to use. The tools passed to the LLM will be bound to that environment.
+[Environments](/docs/gram/concepts/environments) in Gram manage authentication credentials, API keys, and configuration variables for [toolsets](/docs/gram/concepts/toolsets). When building agent applications with the Vercel AI SDK or other AI frameworks, environments provide a secure way to manage the credentials needed to execute tools.
+
+## Overview
+
+Environments enable secure credential management without hardcoding sensitive information. They work by:
+
+- Storing credentials and configuration in Gram's secure environment system
+- Binding tools to a specific environment at initialization
+- Executing all tool calls using the credentials from that environment
+
+This approach is particularly useful for:
+
+- **Multi-environment deployments**: Separate development, staging, and production credentials
+- **Team collaboration**: Share toolsets without exposing credentials
+- **Enterprise security**: Centralized credential management and rotation
+
+Learn more about [environment concepts](/docs/gram/concepts/environments) and how to [configure environments](/docs/gram/gram-functions/configuring-environments).
+
+## Using environments with Vercel AI SDK
+
+The Vercel AI SDK integration demonstrates how to bind tools to a specific environment. The `environment` parameter in the `tools()` method determines which credentials the tools will use at runtime.

```ts filename="vercel-example.ts" {15}
import { generateText } from "ai";
@@ -34,6 +54,14 @@
console.log(result.text);
```

+In this example, all tools in the `marketing` toolset execute using credentials from the `production` environment. This keeps production API keys secure and separate from development credentials.
+
+See the [Vercel AI SDK integration guide](/docs/gram/api-clients/using-vercel-ai-sdk-with-gram-mcp-servers) for complete setup instructions.
+
+## Using environments with OpenAI Agents SDK
+
+The OpenAI Agents SDK follows the same pattern, binding tools to an environment at initialization:
+
```py filename="openai-agents-example.py" {17}
import asyncio
import os
@@ -68,7 +96,19 @@ if __name__ == "__main__":
    asyncio.run(main())
```

-Environments are not required to use the Gram SDKs. You can pass the necessary variables directly when creating an instance. This is useful when users of your Gram toolsets prefer to use their own credentials.
+The Python SDK provides the same environment binding capabilities, ensuring consistent credential management across different programming languages.
+
+Learn more about [using the OpenAI Agents SDK with Gram](/docs/gram/api-clients/using-openai-agents-sdk-with-gram-mcp-servers).
+
+## Bring your own environment variables
+
+For applications where end users provide their own credentials, pass environment variables directly to the SDK adapter instead of referencing a managed environment. This is common when:
+
+- Building multi-tenant applications where each user has their own API keys
+- Creating developer tools where users supply their own credentials
+- Distributing applications that integrate with user-owned services
+
+### TypeScript example

```ts filename="byo-env-vars.ts" {5-8}
import { VercelAdapter } from "@gram-ai/sdk/vercel";
@@ -82,6 +122,10 @@
});
```

+The `environmentVariables` object accepts any key-value pairs that the toolset's functions require. This approach bypasses Gram's environment system entirely, using credentials provided at runtime instead.
+
+### Python example
+
```py filename="byo-env-vars.py" {6-9}
import os
from gram_ai.openai_agents import GramOpenAIAgents
@@ -94,3 +138,30 @@
    }
)
```
+
+Both SDK adapters support the same `environment_variables` parameter, providing flexibility in how credentials are managed.
+
+## Choosing between environments and direct variables
+
+Consider these factors when deciding between managed environments and direct variables:
+
+**Use managed environments when:**
+- Deploying applications with fixed credentials
+- Managing multiple environments (dev, staging, production)
+- Sharing toolsets across teams without exposing credentials
+- Implementing centralized credential rotation
+
+**Use direct variables when:**
+- Building multi-tenant applications
+- Creating end-user tools where users supply credentials
+- Testing with dynamic credential sets
+- Implementing user-specific customizations
+
+Both approaches can be combined in the same application for different use cases.
+
+## Next steps
+
+- Learn about [environment concepts and management](/docs/gram/concepts/environments)
+- Explore [other AI framework integrations](/docs/gram/api-clients/using-anthropic-api-with-gram-mcp-servers)
+- Set up [API keys](/docs/gram/concepts/api-keys) for Gram SDK access
+- Read about [toolset organization](/docs/gram/concepts/toolsets)
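The managed-environment vs bring-your-own choice added in this file comes down to where credentials are resolved when tools are bound at initialization. A minimal, self-contained model of that decision (the `bindTools` helper, the `managedEnvironments` table, and the `HUBSPOT_API_KEY` name are all illustrative assumptions; only the `environment` / `environmentVariables` option names mirror the documented parameters):

```typescript
// Illustrative model of environment binding; not the real Gram SDK API.
type Vars = Record<string, string>;

// Stand-in for Gram's managed environment store.
const managedEnvironments: Record<string, Vars> = {
  production: { HUBSPOT_API_KEY: "prod-key" },
  staging: { HUBSPOT_API_KEY: "staging-key" },
};

// Tools bound at initialization resolve credentials from exactly one source:
// direct variables (bring-your-own) win, otherwise a managed environment.
function bindTools(opts: { environment?: string; environmentVariables?: Vars }): Vars {
  if (opts.environmentVariables) return opts.environmentVariables; // bring-your-own
  if (opts.environment) return managedEnvironments[opts.environment]; // managed
  throw new Error("no credentials configured");
}

console.log(bindTools({ environment: "production" }).HUBSPOT_API_KEY); // prod-key
console.log(bindTools({ environmentVariables: { HUBSPOT_API_KEY: "user-key" } }).HUBSPOT_API_KEY); // user-key
```

In a multi-tenant app the second form would be called once per user with that user's credentials, while a fixed deployment would pick one managed environment name at startup.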
