AWSBedrockAgentCoreProcessor example #120
Changes from all commits: 39d5932, 582fc26, 505afbd, f462e1d
@@ -0,0 +1 @@
.bedrock_agentcore*
@@ -0,0 +1,132 @@
# Amazon Bedrock AgentCore Example

This example demonstrates how to integrate an AgentCore-hosted agent into a Pipecat pipeline.

The pipeline looks like a standard Pipecat bot pipeline, but with an AgentCore agent taking the place of an LLM. User audio is converted to text and sent to the AgentCore agent, which tries to do work on the user's behalf. Responses from the agent are streamed back and spoken. User and agent messages are recorded in a context object.

Note that unlike an LLM service in a traditional Pipecat bot pipeline, the AgentCore agent by default does not receive the full conversation context after each user turn, only the last user message. It is up to the AgentCore agent to decide whether and how to manage its own memory (AgentCore includes memory capabilities).
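
In concrete terms, each user turn reaches the agent as a JSON payload with a `prompt` field, and whatever the agent yields back under `response` is streamed to TTS and spoken. This mirrors the example agents included here; the field names come from those agents, not from AgentCore itself. A minimal agent honoring that contract might look like the following sketch:

```python
# Minimal sketch of the agent-side contract described above.
# It mirrors the example agents in this repo rather than defining any new API.
from bedrock_agentcore.runtime import BedrockAgentCoreApp

app = BedrockAgentCoreApp()


@app.entrypoint
async def invoke(payload, context):
    # The pipeline sends only the last user message as "prompt".
    user_text = payload.get("prompt", "")
    # Each yielded chunk is streamed back and spoken by the bot.
    yield {"response": f"You said: {user_text}"}


if __name__ == "__main__":
    app.run()
```
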
## Prerequisites

- Accounts with:
  - AWS (with access to Bedrock AgentCore and Claude 3.7 Sonnet model)
  - Deepgram
  - Cartesia
  - Daily (optional)
- Python 3.10 or higher
- `uv` package manager

## Setup

### Install Dependencies

Install dependencies needed to run the Pipecat bot as well as the AgentCore CLI.

```bash
uv sync
```

This installs:

- **Pipecat** - The voice AI pipeline framework
- **Strands** - AWS's agentic framework (used in the code agent)
- **Bedrock AgentCore Starter Toolkit** - CLI tools for deploying agents
- **Strands Tools** - Pre-built tools like the Code Interpreter
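
If you'd like to confirm the install before going further, a quick check like the one below works. The import names are taken from the code in this example (`pipecat` is the Pipecat framework's import name), and `agentcore` is the CLI used throughout the rest of this README.

```bash
# Optional sanity check after `uv sync`
uv run python -c "import pipecat, strands, bedrock_agentcore; print('imports ok')"
uv run agentcore --help
```
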
### Set Environment Variables

Copy `env.example` to `.env` and fill in the values in `.env`.

```bash
cp env.example .env
```

**Do not worry** about `AWS_AGENT_ARN` yet. You'll obtain an agent ARN as part of the following steps, when you deploy your agent to AgentCore Runtime.
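
For reference, a filled-out `.env` looks roughly like this. `env.example` is the source of truth; the Deepgram/Cartesia/Daily variable names below are assumptions, while the AWS variables are the ones referenced in this README.

```bash
# Illustrative .env -- check env.example for the exact variable names
DEEPGRAM_API_KEY=...
CARTESIA_API_KEY=...
DAILY_API_KEY=...           # only needed when using the Daily transport
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=...
AWS_AGENT_ARN=...           # filled in after you deploy your agent (below)
```
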
## Deploying Your Agent to AgentCore Runtime

Before you can run the Pipecat bot file, you need to deploy an agent to AgentCore Runtime. This example includes two agents:

- **Dummy agent** (`dummy_agent.py`) - Reports progress while pretending to carry out a relatively long-running task
- **Code agent** (`code_agent.py`) - An algorithmic-problem-solving agent built with Strands that can write and execute Python code to answer questions

### About the Code Agent

The code agent demonstrates how to use **Strands** (AWS's agentic framework) within AgentCore:

- Uses the **Strands Agent** with the Claude 3.7 Sonnet model
- Includes the **AgentCore Code Interpreter** tool for executing Python code
- Streams responses in real time for a conversational experience
- Is designed for voice interaction, with TTS-friendly output

Below is a bare-bones walkthrough of deploying an agent to AgentCore Runtime. For a comprehensive guide to getting started with Amazon Bedrock AgentCore, including detailed setup instructions, see the [Amazon Bedrock AgentCore Developer Guide](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/what-is-bedrock-agentcore.html).

### IAM Setup

Configure your IAM user with the necessary policies for AgentCore usage. Start with these:

- `BedrockAgentCoreFullAccess`
- A new policy (named, for example, `BedrockAgentCoreCLI`) configured [like this](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/runtime-permissions.html#runtime-permissions-starter-toolkit)

You can also choose to specify more granular permissions; see the [Amazon Bedrock AgentCore docs](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/runtime-permissions.html) for more information.
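
If you manage IAM from the CLI, attaching the managed policy to your user looks like the sketch below. This is illustrative only: the `arn:aws:iam::aws:policy/` prefix is the usual location for AWS managed policies, and the custom `BedrockAgentCoreCLI` policy from the linked doc is attached the same way once you've created it.

```bash
# Illustrative only -- replace <your-iam-user> with your IAM user name
aws iam attach-user-policy \
  --user-name <your-iam-user> \
  --policy-arn arn:aws:iam::aws:policy/BedrockAgentCoreFullAccess
```
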
### Environment Setup

To simplify the remaining AgentCore deployment steps in this README, it's a good idea to export some AWS-specific environment variables:

```bash
export AWS_SECRET_ACCESS_KEY=...
export AWS_ACCESS_KEY_ID=...
export AWS_REGION=...
```

### Agent Configuration

Create a new AgentCore configuration.

```bash
cd agents
uv run agentcore configure -e code_agent.py
```

Follow the interactive prompts to complete the configuration. It's OK to just accept all the defaults.

### Agent Deployment

Deploy your agent to AgentCore Runtime.

```bash
uv run agentcore launch
```

This step prints the agent ARN. Copy it and paste it into your `.env` file as the `AWS_AGENT_ARN` value.

This is also the command to run after you've updated your agent code and need to redeploy.

### Validation

Try running your agent on AgentCore Runtime.

```bash
uv run agentcore invoke '{"prompt": "What is the meaning of life?"}'
```
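
You can also smoke-test an agent locally before invoking it in the cloud. The sketch below assumes the agent's dependencies are importable in your local environment and that `BedrockAgentCoreApp.run()` serves the standard AgentCore runtime contract (port 8080, `POST /invocations`); treat both as assumptions and adjust as needed.

```bash
# Illustrative local test of the dummy agent
uv run python dummy_agent.py &
curl -X POST http://localhost:8080/invocations \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the meaning of life?"}'
```
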
### Obtaining Your Agent ARN at Any Point

Your agent status will include its ARN.

```bash
uv run agentcore status
```

## Running the Example

With your agent deployed to AgentCore, you can now run the example.

```bash
# Using SmallWebRTC transport
uv run python bot.py

# Using Daily transport
uv run python bot.py -t daily -d
```
|
**Contributor (author):** this was spat out by …
@@ -0,0 +1,69 @@
# Build artifacts
build/
dist/
*.egg-info/
*.egg

# Python cache
__pycache__/
__pycache__*
*.py[cod]
*$py.class
*.so
.Python

# Virtual environments
.venv/
.env
venv/
env/
ENV/

# Testing
.pytest_cache/
.coverage
.coverage*
htmlcov/
.tox/
*.cover
.hypothesis/
.mypy_cache/
.ruff_cache/

# Development
*.log
*.bak
*.swp
*.swo
*~
.DS_Store

# IDEs
.vscode/
.idea/

# Version control
.git/
.gitignore
.gitattributes

# Documentation
docs/
*.md
!README.md

# CI/CD
.github/
.gitlab-ci.yml
.travis.yml

# Project specific
tests/

# Bedrock AgentCore specific - keep config but exclude runtime files
.bedrock_agentcore.yaml
.dockerignore
.bedrock_agentcore/

# Keep wheelhouse for offline installations
# wheelhouse/
@@ -0,0 +1,56 @@
import os

from bedrock_agentcore.memory.integrations.strands.config import (
    AgentCoreMemoryConfig,
    RetrievalConfig,
)
from bedrock_agentcore.memory.integrations.strands.session_manager import (
    AgentCoreMemorySessionManager,
)
from bedrock_agentcore.runtime import BedrockAgentCoreApp
from strands import Agent
from strands_tools.code_interpreter import AgentCoreCodeInterpreter

**Contributor:** Strands is not mentioned either.

**Contributor:** I'm adding Strands to the README (and removing OpenAI).

app = BedrockAgentCoreApp()

MEMORY_ID = os.getenv("BEDROCK_AGENTCORE_MEMORY_ID")

**Contributor:** This isn't mentioned anywhere. What does it refer to?

**Contributor:** It seems to run without this. Maybe AgentCore sets this env var?

REGION = os.getenv("AWS_REGION")
MODEL_ID = "us.anthropic.claude-3-7-sonnet-20250219-v1:0"


@app.entrypoint
async def invoke(payload, context):
    actor_id = "quickstart-user"

    # Get runtime session ID for isolation
    session_id = getattr(context, "session_id", None)

    # Create Code Interpreter with runtime session binding
    code_interpreter = AgentCoreCodeInterpreter(region=REGION, auto_create=True)

    agent = Agent(
        model=MODEL_ID,
        system_prompt="""You are a helpful assistant specializing in solving algorithmic problems with code.

Your output will be spoken aloud by text-to-speech, so use plain language without special formatting or characters (for instance, **AVOID NUMBERED OR BULLETED LISTS**).

Think aloud as you work: explain your approach before coding, describe what you're doing as you write code, and analyze the results after execution. Narrate your reasoning throughout to make your process transparent and educational.

Also, try to be as succinct as possible. Avoid unnecessary verbosity.
""",
        tools=[code_interpreter.code_interpreter],
    )

    # Stream the response
    async for event in agent.stream_async(payload.get("prompt", "")):
        if "data" in event:
            chunk = event["data"]
            # Yield chunks as they arrive for real-time streaming
            yield {"response": chunk}
        elif "result" in event:
            # Final result with stop reason
            yield {"done": True}


if __name__ == "__main__":
    app.run()

kompfner marked this conversation as resolved.
@@ -0,0 +1,29 @@
import asyncio

from bedrock_agentcore import BedrockAgentCoreApp

app = BedrockAgentCoreApp()


@app.entrypoint
async def invoke(payload, context):
    prompt = payload.get("prompt")

    yield {"response": f"Handling your request: {prompt}."}

    # Simulate some processing
    await asyncio.sleep(5)

    yield {"response": " Still working on it..."}

    # Simulate more processing
    await asyncio.sleep(5)

    yield {"response": " Finished! The answer, as always, is 'who knows?'."}

    # Remove the yields above and uncomment the line below to test a non-streamed response
    # return {"response": "Finished! The answer, as always, is 'who knows?'."}


if __name__ == "__main__":
    app.run()
@@ -0,0 +1,3 @@
bedrock-agentcore
strands-agents
strands-agents-tools
**Review comment:** Might be worth an explanation here as to why you'd want to do this. I think the most promising use case is a secondary pipeline for longer-running tasks that "reports back" its progress to (and can be queried about its progress by) the main voice pipeline, something @aconchillo is still working on.

That, or maybe a more realistic agent example.

**Reply:** Agreed. This would make more sense in the context of an async agent that completes a task, secondary to a voice agent. Since we don't have any examples along those lines yet, I'm not sure how to introduce that. For now, perhaps what you have is fine?