This directory is the publishable workshop package.
It intentionally no longer contains the main workshop script collection.
The real scripts live in the separate workshop repo, while this package keeps a
few tiny local examples and a scratch script.ts for experimentation.
- `sdk.ts` re-exports the shared runtime/client SDK from `apps/events-contract/src/sdk.ts`
- `sdk.ts` also exports lightweight network test helpers for workshop e2e tests
- `contract.ts` re-exports the shared contract from `apps/events-contract/src/index.ts`
- `cli.ts` runs workshop scripts from the current working directory
- `examples/` contains a few tiny runnable scripts for local messing around inside this repo
Local development:

```sh
cd ai-engineer-workshop
pnpm install
pnpm w --help
pnpm build
pnpm test:e2e
```

If you want to experiment from inside this repo, put scripts in:

- `ai-engineer-workshop/script.ts` for a single scratch file
- `ai-engineer-workshop/examples/...` for a few longer-lived examples
Those example files can import exactly the same way as the separate workshop repo:

```ts
import { createEventsClient, normalizePathPrefix, runWorkshopMain } from "ai-engineer-workshop";
```

For networked tests, the SDK also exports helpers that default to:

```sh
BASE_URL=https://events.iterate.com
PROJECT_SLUG=public
```
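A hypothetical sketch of how a script might apply those defaults itself (the SDK's actual helpers may implement this differently):

```ts
// Fall back to the documented defaults when the environment doesn't override them.
// These two names mirror the env vars above; the helper itself is an assumption.
const BASE_URL = process.env.BASE_URL ?? "https://events.iterate.com";
const PROJECT_SLUG = process.env.PROJECT_SLUG ?? "public";

console.log(`targeting ${BASE_URL} (project: ${PROJECT_SLUG})`);
```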
`createEventsClient()` now returns the raw oRPC client, so append calls use the
contract shape directly:

```ts
await client.append({
  path: streamPath,
  event: {
    type: "hello-world",
    payload: { message: "hello world" },
  },
});
```

Processors use the shared `defineProcessor()` helper from `apps/events-contract/src/sdk.ts`:
```ts
const processor = defineProcessor(() => ({
  slug: "hello-world",
  initialState: { seen: 0 },
  reduce: ({ event, state }) =>
    event.type === "hello-world" ? { seen: state.seen + 1 } : state,
  afterAppend: async ({ append, event, state }) => {
    if (event.type !== "hello-world" || state.seen !== 1) return;
    await append({
      event: { type: "hello-world-seen", payload: { sourceOffset: event.offset } },
    });
  },
}));
```

Processor `append()` now always takes an options object:

```ts
await append({ event: { type: "pong", payload: {} } });
await append({ path: "./child", event: { type: "child-ping", payload: {} } });
await append({ path: "../", event: { type: "notify-parent", payload: {} } });
```

For multi-stream workers, `PullSubscriptionPatternProcessorRuntime` watches `/`
for `child-stream-created` events, keeps discovery live, and spins up one
processor runtime per matching stream path, e.g. `/team/*` or `/team/**/*`.
That works because this directory is itself the `ai-engineer-workshop` package root, so package self-reference resolves correctly from files inside it.
Examples are discoverable via:

```sh
cd ai-engineer-workshop
pnpm w --help
pnpm w run --script examples/01-hello-world/append-hello-world.ts
pnpm w run --script examples/03-pattern-processor/prove-jonas-ping-pong.ts
pnpm w run --script examples/04-llm-codemode/run-llm-codemode-loop.ts
pnpm w run --script examples/05-slack-codemode/run-slack-codemode-loop.ts
pnpm w run --script examples/06-slack-composition/run-slack-composition.ts
pnpm w run --script examples/07-slack-tools/run-slack-tools.ts
```

The deployed processor example lives in `ai-engineer-workshop/examples/deployed-processor`.
Pattern-processor example:

- `examples/03-pattern-processor/jonas-ping-pong-processor.ts` watches `"/jonas/**/*"` and replies to every `ping` with a `pong`.
- `examples/03-pattern-processor/prove-jonas-ping-pong.ts` runs against a real local `apps/events` worker and asserts only matching `/jonas/...` streams get a derived `pong`.
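The decision that processor makes can be condensed into a pure function. This is a hypothetical simplification for illustration; the real example uses `defineProcessor()` and the pattern runtime, not this helper:

```ts
// Hypothetical: should a given event on a given stream trigger a derived pong?
function shouldReplyWithPong(streamPath: string, eventType: string): boolean {
  return streamPath.startsWith("/jonas/") && eventType === "ping";
}
```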
LLM + codemode example:

- `examples/04-llm-codemode/coding-agent-system-prompt.ts` builds the coding-agent prompt. It tells the model its agent path, gives a tiny explanation of the events system, and includes concrete `fetch()` examples for reading streams, appending events, and sending `llm-input-added` to another agent.
- `examples/04-llm-codemode/agent.ts` runs an OpenAI Responses API loop from `llm-input-added`, streams every OpenAI event back into the stream, cancels and restarts on newer input, and emits `codemode-block-added` when the assistant output contains `ts` blocks. Completion is recorded with `llm-request-completed`.
- `examples/04-llm-codemode/agent-types.ts` holds the agent event contracts and the event-to-prompt mirroring helpers.
- `examples/04-llm-codemode/codemode.ts` is completely independent from the agent loop and only knows how to execute `codemode-block-added`. It writes `.codemode/<block-count>/code.ts`, compiles that file with `tsc`, runs the emitted JS, then appends `codemode-result-added`.
- `examples/04-llm-codemode/codemode-types.ts` holds the codemode event contracts.
- `examples/04-llm-codemode/run-llm-codemode-loop.ts` starts both processors against the same stream.
- `e2e/vitest/codemode-agent.test.ts` is the proper Vitest network proof. It covers the cancel-and-restart loop and a second case where one agent sends `llm-input-added` to another agent over the events API.
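The "emit `codemode-block-added` when the assistant output contains `ts` blocks" step boils down to pulling fenced `ts` code blocks out of text. A hypothetical sketch of that extraction (the real agent may parse the model output differently):

```ts
// Build the fence marker programmatically so this snippet contains no literal
// triple-backtick fences of its own.
const FENCE = "`".repeat(3);

// Hypothetical: return the bodies of all `ts` fenced blocks in the text.
function extractTsBlocks(text: string): string[] {
  const blocks: string[] = [];
  const re = new RegExp(FENCE + "ts\\r?\\n([\\s\\S]*?)" + FENCE, "g");
  for (const m of text.matchAll(re)) blocks.push(m[1]);
  return blocks;
}
```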
Slack codemode example:

- `examples/05-slack-codemode/agent.ts` is the Slack-focused variant. It still uses the same LLM loop shape, but it mirrors `invalid-event-appended` into YAML prompt input and runs plain `gpt-5.4` with reasoning enabled.
- `examples/05-slack-codemode/coding-agent-system-prompt.ts` tells the model to respond to Slack by emitting one `ts` block that POSTs to `response_url`.
- `examples/05-slack-codemode/codemode.ts` keeps the codemode runner independent and writes artifacts under `.codemode/<stream-path>/<block-count>/`.
- `examples/05-slack-codemode/run-slack-codemode-loop.ts` starts the Slack variant and prints a raw webhook example you can POST straight into the stream.
- `e2e/vitest/slack-codemode-agent.test.ts` proves the full flow against the deployed events service: raw Slack JSON becomes `invalid-event-appended`, the agent sees a YAML prompt, and two turns on the same stream produce a remembered Slack reply.
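The "mirrors `invalid-event-appended` into YAML prompt input" idea can be sketched with a flat key/value serializer. This is an assumption about the prompt shape, not the agent's real code, which likely handles nesting and escaping properly:

```ts
// Hypothetical: render a flat webhook payload as YAML-ish prompt text.
function toYamlPrompt(payload: Record<string, unknown>): string {
  return Object.entries(payload)
    .map(([key, value]) => `${key}: ${JSON.stringify(value)}`)
    .join("\n");
}
```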
Workshop kernel examples:

- `examples/06-slack-composition/slack-input.ts`, `examples/06-slack-composition/agent.ts`, and `examples/06-slack-composition/codemode.ts` are the small teaching version of the system: one processor normalizes raw Slack JSON, one turns stream events into LLM input and code blocks, and one runs those blocks.
- `e2e/vitest/slack-composition.test.ts` proves the minimal chain: raw Slack webhook -> normalized event -> LLM input -> Slack reply.
- `examples/07-slack-tools/codemode.ts` is the follow-on example where blocks export `default async function (ctx)` and a new `codemode-tool-added` event can extend `ctx.*`.
- `examples/07-slack-tools/run-slack-tools.ts` registers a tiny `ctx.replyToSlack(...)` tool and also shows the real `@slack/web-api` package as the next step for `ctx.slackApi`.
- `e2e/vitest/slack-tools.test.ts` proves both pieces separately: the tool works when called directly from codemode, and the agent can still keep context across two Slack turns on the same stream.
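The `ctx`-extension idea behind `codemode-tool-added` can be sketched in a few lines: tools registered by name become callable as `ctx.<name>` inside a block. The names and shapes below are hypothetical, not the example's actual types:

```ts
// Hypothetical ctx shape: a bag of named tool functions.
type Ctx = Record<string, (...args: unknown[]) => unknown>;

function makeCtx(): Ctx {
  return {};
}

// Registering a tool just attaches it under its name.
function registerTool(ctx: Ctx, name: string, fn: (...args: unknown[]) => unknown): void {
  ctx[name] = fn;
}

// A codemode block then exports `default async function (ctx)` and calls tools off ctx.
const exampleBlock = {
  default: async (ctx: Ctx) => ctx["replyToSlack"]("hello from codemode"),
};
```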
Published preview packages are built directly from this folder via pkg.pr.new.
The separate scripts repo lives in a separate local checkout of `ai-engineer-workshop`. That repo can either:

- depend on a pkg.pr.new preview of this package
- override `ai-engineer-workshop` to a local link pointing at this folder during development
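The local-link option can be expressed with a pnpm override in the consuming repo's `package.json`. This is a hedged sketch: the exact `link:` path depends on where this folder is checked out relative to that repo:

```json
{
  "pnpm": {
    "overrides": {
      "ai-engineer-workshop": "link:../ai-engineer-workshop"
    }
  }
}
```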