
Conversation

@kompfner (Contributor) commented Nov 21, 2025

Here's the associated Pipecat code: pipecat-ai/pipecat#3113 (which needs to land and be published before this PR can land)

Merge after pipecat-ai 0.0.96 release.

@kompfner (Contributor Author) commented:

This was spat out by the agentcore CLI.


The pipeline looks like a standard Pipecat bot pipeline, but with an AgentCore agent taking the place of an LLM. User audio is converted to text and sent to the AgentCore agent, which attempts to do work on the user's behalf. Responses from the agent are streamed back and spoken. User and agent messages are recorded in a context object.

Note that unlike an LLM service found in a traditional Pipecat bot pipeline, the AgentCore agent by default does not receive the full conversation context after each user turn, only the last user message. It is up to the AgentCore agent to decide whether and how to manage its own memory (AgentCore includes memory capabilities).
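For illustration, the pipeline shape described above might look roughly like the following sketch, assuming transport, stt, tts, context_aggregator, and the agentcore processor have already been constructed in the usual Pipecat way (this is an illustrative sketch, not the example's actual code):

from pipecat.pipeline.pipeline import Pipeline

pipeline = Pipeline(
    [
        transport.input(),               # user audio in
        stt,                             # speech-to-text
        context_aggregator.user(),       # record user messages in the context
        agentcore,                       # AWSBedrockAgentCoreProcessor, in place of an LLM service
        tts,                             # text-to-speech
        transport.output(),              # bot audio out
        context_aggregator.assistant(),  # record agent responses in the context
    ]
)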
@kompfner (Contributor Author) commented:

Might be worth an explanation here as to why you'd want to do this. I think the most promising use case is a secondary pipeline for longer-running tasks that "reports back" its progress to (and can be queried about its progress by) the main voice pipeline, something @aconchillo is still working on.

That, or maybe a more realistic agent example.

@markbackman (Contributor) commented Nov 25, 2025:

Agreed. This would make more sense in the context of an async agent that completes a task, secondary to a voice agent. Since we don't have any examples along those lines yet, I'm not sure how to introduce that. For now, perhaps what you have is fine?

@kompfner kompfner changed the title AWSBedrockAgentCoreProcessor example [DO NOT MERGE] AWSBedrockAgentCoreProcessor example Nov 21, 2025
@kompfner kompfner marked this pull request as ready for review November 21, 2025 21:15
…output, as it's expected (it should behave like LLM output)
…ion of the CLI for things to work but the newest one isn't compatible with other dependencies, so I arbitrarily picked 1.25.

app = BedrockAgentCoreApp()

MEMORY_ID = os.getenv("BEDROCK_AGENTCORE_MEMORY_ID")
Contributor commented:

This isn't mentioned anywhere. What does it refer to?

Contributor commented:

It seems to run without this. Maybe AgentCore sets this env var?

)
from bedrock_agentcore.runtime import BedrockAgentCoreApp
from strands import Agent
from strands_tools.code_interpreter import AgentCoreCodeInterpreter
Contributor commented:

Strands is not mentioned either.

Contributor commented:

I'm adding Strands to the README (and removing OpenAI).
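For context on how these imports fit together: a minimal AgentCore entrypoint might look roughly like this sketch, following the AgentCore and Strands getting-started patterns (the payload key and the code-interpreter wiring here are assumptions, not this example's actual code):

import os

from bedrock_agentcore.runtime import BedrockAgentCoreApp
from strands import Agent
from strands_tools.code_interpreter import AgentCoreCodeInterpreter

app = BedrockAgentCoreApp()

# Assumed wiring: expose AgentCore's code interpreter as a Strands tool.
code_interpreter = AgentCoreCodeInterpreter(region=os.getenv("AWS_REGION", "us-west-2"))
agent = Agent(tools=[code_interpreter.code_interpreter])

@app.entrypoint
def invoke(payload):
    # Per the note above, the processor sends only the latest user
    # message, not the full conversation context.
    prompt = payload.get("prompt", "")
    result = agent(prompt)
    return {"result": result.message}

if __name__ == "__main__":
    app.run()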

@markbackman (Contributor) left a review comment:

Once I got my IAM permissions all sorted out, this works really well!

@markbackman markbackman changed the title [DO NOT MERGE] AWSBedrockAgentCoreProcessor example AWSBedrockAgentCoreProcessor example Nov 25, 2025
@aconchillo (Contributor) commented:
LGTM!
