# Examples

This directory contains example scripts and interactive tools for exploring the Llama Stack Client Python SDK.

## Interactive Agent CLI

`interactive_agent_cli.py` - An interactive command-line tool for exploring agent turn/step events with server-side tools.

### Features

- 🔍 **File Search Integration**: Automatically sets up a vector store with a sample knowledge base (see the sketch below)
- 📊 **Event Streaming**: See real-time turn/step events as the agent processes your queries
- 🎯 **Server-Side Tools**: Demonstrates file_search and other server-side tool execution
- 💬 **Interactive REPL**: Chat-style interface for easy exploration

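For reference, the file-search setup the script automates looks roughly like the sketch below. It assumes the client exposes OpenAI-compatible `files` and `vector_stores` resources (the `vs_...` ID in the example session suggests this); the method names, store name, and document contents are illustrative rather than taken from the script, and the exact calls may differ between client versions. The CLI performs an equivalent setup for you on startup.

```python
from pathlib import Path

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Write a sample knowledge-base document to disk (hypothetical content).
doc_path = Path("phoenix_overview.txt")
doc_path.write_text("Project Phoenix is a next-generation distributed systems platform...")

# Upload the file, then create a vector store and attach the file to it
# so the server-side file_search tool can retrieve from it.
with doc_path.open("rb") as f:
    uploaded = client.files.create(file=f, purpose="assistants")

vector_store = client.vector_stores.create(name="demo-knowledge-base")
client.vector_stores.files.create(vector_store_id=vector_store.id, file_id=uploaded.id)

print("Vector store ID:", vector_store.id)  # e.g. vs_abc123
```
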
### Prerequisites

1. Start a Llama Stack server with OpenAI provider:
   ```bash
   cd ~/local/llama-stack
   source ../stack-venv/bin/activate
   export OPENAI_API_KEY=<your-key>
   llama stack run ci-tests --port 8321
   ```

2. Install the client (from repository root):
   ```bash
   cd /Users/ashwin/local/new-stainless/llama-stack-client-python
   uv sync
   ```

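With the server running and the client installed, a quick sanity check from Python confirms the two can talk before you launch the CLI (a minimal sketch; adjust `base_url` if you used a different port):

```python
from llama_stack_client import LlamaStackClient

# Point the client at the locally running Llama Stack server.
client = LlamaStackClient(base_url="http://localhost:8321")

# Listing the available models is a cheap way to verify connectivity
# and that the OpenAI provider's models are registered.
for model in client.models.list():
    print(model)
```
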
### Usage

Basic usage (uses defaults: openai/gpt-4o, localhost:8321):
```bash
cd examples
uv run python interactive_agent_cli.py
```

With custom options:
```bash
uv run python interactive_agent_cli.py --model openai/gpt-4o-mini --base-url http://localhost:8321
```

### Example Session

```
╔══════════════════════════════════════════════════════════════╗
║                                                              ║
║        🤖  Interactive Agent Explorer  🔍                    ║
║                                                              ║
║  Explore agent turn/step events with server-side tools      ║
║                                                              ║
╚══════════════════════════════════════════════════════════════╝

🔧 Configuration:
  Model: openai/gpt-4o
  Server: http://localhost:8321

🔌 Connecting to server...
  ✓ Connected

📚 Setting up knowledge base...
  Indexing documents....... ✓
  Vector store ID: vs_abc123

🤖 Creating agent with tools...
  ✓ Agent ready

💬 Type your questions (or 'quit' to exit, 'help' for suggestions)
──────────────────────────────────────────────────────────────

🧑 You: What is Project Phoenix?

🤖 Assistant:

  ┌─── Turn turn_abc123 started ───┐
  │                                 │
  │  🧠 Inference Step 0 started    │
  │  🔍 Tool Execution Step 1       │
  │     Tool: knowledge_search      │
  │     Status: server_side         │
  │  🧠 Inference Step 2            │
  │  ✓ Response: Project Phoenix... │
  │                                 │
  └─── Turn completed ──────────────┘

Project Phoenix is a next-generation distributed systems platform launched in 2024...
```

### What You'll See

The tool uses `AgentEventLogger` to display:
- **Turn lifecycle**: TurnStarted → TurnCompleted
- **Inference steps**: When the model is thinking/generating text
- **Tool execution steps**: When server-side tools (like file_search) are running
- **Step metadata**: Whether tools are server-side or client-side
- **Real-time streaming**: Text appears as it's generated

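Under the hood, the display above follows the usual agent streaming pattern: create an agent, open a session, stream a turn, and feed the resulting events through `AgentEventLogger`. A minimal sketch of that loop is below; the `Agent` constructor arguments and the file_search tool wiring are assumptions based on common `llama_stack_client` usage and may not match exactly what `interactive_agent_cli.py` does.

```python
from llama_stack_client import Agent, AgentEventLogger, LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Create an agent that can call the server-side file_search tool against
# the vector store set up earlier (the vector store ID here is illustrative).
agent = Agent(
    client,
    model="openai/gpt-4o",
    instructions="Answer questions using the knowledge base when relevant.",
    tools=[{"type": "file_search", "vector_store_ids": ["vs_abc123"]}],
)

session_id = agent.create_session("interactive-demo")

# Stream one turn and print each turn/step event as it arrives.
response = agent.create_turn(
    session_id=session_id,
    messages=[{"role": "user", "content": "What is Project Phoenix?"}],
    stream=True,
)
for event in AgentEventLogger().log(response):
    event.print()
```
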
### Sample Questions

Type `help` in the interactive session to see suggested questions, or try:
- "What is Project Phoenix?"
- "Who is the lead architect?"
- "What ports does the system use?"
- "How long do JWT tokens last?"
- "Where is the production environment deployed?"

### Exit

Type `quit`, `exit`, `q`, or press `Ctrl+C` to exit.