AWSBedrockAgentCoreProcessor example
#120
base: main
Conversation
This was spat out by the agentcore CLI.
The pipeline looks like a standard Pipecat bot pipeline, but with an AgentCore agent taking the place of an LLM. User audio gets converted to text and sent to the AgentCore agent, which will try to do work on the user's behalf. Responses from the agent are streamed back and spoken. User and agent messages are recorded in a context object.

Note that unlike an LLM service found in a traditional Pipecat bot pipeline, the AgentCore agent by default does not receive the full conversation context after each user turn, only the last user message. It is up to the AgentCore agent to decide whether and how to manage its own memory (AgentCore includes memory capabilities).
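For orientation, here is a rough sketch of the pipeline shape described above. This is not the code from this PR: the `AWSBedrockAgentCoreProcessor` import path and constructor are assumptions (the real processor lands in pipecat-ai/pipecat#3113), and the STT/TTS services, transport, and context aggregator are placeholders.

```python
# Structural sketch only, not the example's actual code. The AgentCore
# processor's import path and arguments are assumptions; see
# pipecat-ai/pipecat#3113 for the real API. STT, TTS, transport, and the
# context aggregator are placeholders for whatever the example configures.
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.task import PipelineTask

stt = ...                 # speech-to-text service (e.g., Deepgram)
tts = ...                 # text-to-speech service (e.g., Cartesia)
transport = ...           # audio transport (Daily, WebRTC, etc.)
context_aggregator = ...  # records user/agent messages in a context object
agentcore = ...           # hypothetical: AWSBedrockAgentCoreProcessor(agent_arn=..., region=...)

pipeline = Pipeline(
    [
        transport.input(),               # user audio in
        stt,                             # user audio -> text
        context_aggregator.user(),       # record user messages in the context
        agentcore,                       # last user message -> AgentCore agent; responses stream back as text
        tts,                             # streamed agent text -> speech
        transport.output(),              # bot audio out
        context_aggregator.assistant(),  # record agent responses in the context
    ]
)

task = PipelineTask(pipeline)  # run with a PipelineRunner in a real bot
```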
Might be worth an explanation here as to why you'd want to do this. I think the most promising use-case is a secondary pipeline for longer-running tasks that "reports back" its progress to (and can be queried about its progress by) the main voice pipeline, something @aconchillo is still working on.
That, or maybe a more realistic agent example.
Agreed. This would make more sense in the context of an async agent that completes a task, secondary to a voice agent. Since we don't have any examples along those lines yet, I'm not sure how to introduce that. For now, perhaps what you have is fine?
AWSBedrockAgentCoreProcessor example
Force-pushed from 292e3a9 to 1a0023d
Force-pushed from 1a0023d to 39d5932
…output, as it's expected (it should behave like LLM output)
Force-pushed from 0b9988b to 582fc26
…ion of the CLI for things to work but the newest one isn't compatible with other dependencies, so I arbitrarily picked 1.25.
app = BedrockAgentCoreApp()
MEMORY_ID = os.getenv("BEDROCK_AGENTCORE_MEMORY_ID")
This isn't mentioned anywhere. What does it refer to?
It seems to run without this. Maybe AgentCore sets this env var?
)
from bedrock_agentcore.runtime import BedrockAgentCoreApp
from strands import Agent
from strands_tools.code_interpreter import AgentCoreCodeInterpreter
Strands is not mentioned either.
I'm adding Strands to the README (and removing OpenAI).
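For readers who haven't used these packages, here is a minimal sketch of how the imports in this hunk typically fit together on the AgentCore side. It follows the standard bedrock-agentcore + Strands entrypoint pattern, not this PR's actual agent code; the code-interpreter wiring, region handling, and payload key are assumptions.

```python
# Minimal sketch of an AgentCore entrypoint using Strands. This is NOT the
# code from this PR; the code-interpreter wiring, region, and payload key
# ("prompt") are assumptions.
import os

from bedrock_agentcore.runtime import BedrockAgentCoreApp
from strands import Agent
from strands_tools.code_interpreter import AgentCoreCodeInterpreter

app = BedrockAgentCoreApp()

# Optional AgentCore memory; per the discussion above, the example appears to
# run even when this env var is unset.
MEMORY_ID = os.getenv("BEDROCK_AGENTCORE_MEMORY_ID")

# Assumed wiring: give the agent an AgentCore-hosted code interpreter tool.
code_interpreter = AgentCoreCodeInterpreter(region=os.getenv("AWS_REGION", "us-west-2"))
agent = Agent(tools=[code_interpreter.code_interpreter])


@app.entrypoint
def invoke(payload):
    # The Pipecat processor sends only the last user message, not the full
    # conversation context.
    user_message = payload.get("prompt", "")
    result = agent(user_message)
    return {"result": str(result)}


if __name__ == "__main__":
    app.run()
```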
markbackman left a comment
Once I got my IAM permissions all sorted out, this works really well!
LGTM!
Here's the associated Pipecat code: pipecat-ai/pipecat#3113 (which needs to land and be published before this PR can land)
Merge after pipecat-ai 0.0.96 release.