write text → get music
This repo lets you compose multi-instrument music (as MIDI) by writing a chain of plain-text prompts.
Drop `.txt` prompts into `prompts/user/`. The system:
- asks GPT-5 (OpenAI Responses API) to emit a strict, machine-readable music bundle,
- registers those instructions on the Decentralised Creative Network (DCN) as Performative Transactions (PTs),
- executes the PTs to get note arrays,
- stitches all units into one piece (`composition_suite.json`), and
- (optionally) exports a `.mid` using a small Node tool.
You compose in text; the pipeline handles schema, DCN execution, validation, scheduling, and stitching.
- DCN (Decentralised Creative Network) executes creative procedures (PTs) like "generate a note stream."
- A Performative Transaction (PT) has dimensions (`time`, `duration`, `pitch`, `velocity`, `numerator`, `denominator`), each a list of integer ops (`add`, `subtract`, `mul`, `div`).
- When you execute a PT with seeds and length `N`, DCN returns concrete arrays.
- Here, GPT-5 writes PT bundles (one per instrument per bar). We post them to DCN and execute them to obtain the actual notes.
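The exact bundle schema and execution semantics are defined by the DCN SDK, but the core idea — a seed unfolded step by step by a list of integer ops — can be sketched locally. Note the field layout, the cyclic application of ops, and integer division are all assumptions for illustration, not the real DCN behavior:

```python
# Illustrative only: the real PT schema and execution live in the DCN SDK.
# This sketch shows how a list of integer ops could unfold a seed into an array.

OPS = {
    "add": lambda v, n: v + n,
    "subtract": lambda v, n: v - n,
    "mul": lambda v, n: v * n,
    "div": lambda v, n: v // n,  # assuming integer division
}

def run_dimension(seed: int, ops: list[tuple[str, int]], length: int) -> list[int]:
    """Apply the op list cyclically, emitting one value per step."""
    values, v = [], seed
    for i in range(length):
        name, operand = ops[i % len(ops)]
        v = OPS[name](v, operand)
        values.append(v)
    return values

# e.g. a "time" dimension advancing by 2 ticks, then 1, repeating:
print(run_dimension(0, [("add", 2), ("add", 1)], 6))  # → [2, 3, 5, 6, 8, 9]
```

Real execution goes through `dcn_client.py` (`post_feature`, `execute_pt`); this sketch only makes the op/seed/length mental model concrete.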
TL;DR: You describe the music → GPT-5 writes the recipe → DCN cooks it → you get JSON (and optionally MIDI).
- Python 3.10+
- Python deps: `pip install -r requirements.txt`
- OpenAI access to GPT-5
- DCN SDK importable as `dcn` (install per DCN docs)
- (Optional, for MIDI export) Node 18+ with `jzz` and `jzz-midi-smf` (installed via `npm install` in this repo)
Create `secrets.py` in the project root:

```python
# secrets.py
OPENAI_API_KEY = "sk-..."  # your OpenAI key
```

(Alternatively, set OPENAI_API_KEY in your environment.)
For DCN auth, the pipeline will use `PRIVATE_KEY` from the environment if present; otherwise it creates a temporary account for the session.
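A minimal sketch of the env-first key lookup described above (the repo's actual loading order and error handling may differ):

```python
import os

def load_openai_key() -> str:
    """Prefer the environment; fall back to secrets.py if present.
    (Sketch only -- the pipeline's real loader may behave differently.)"""
    key = os.environ.get("OPENAI_API_KEY")
    if key:
        return key
    try:
        from secrets import OPENAI_API_KEY  # project-root secrets.py
        return OPENAI_API_KEY
    except ImportError:
        raise RuntimeError("Set OPENAI_API_KEY in the environment or in secrets.py")
```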
```
compose_suite.py        # discovers prompts, builds each unit, stitches final piece
pt_generate.py          # generates ONE unit (one prompt → 1..N bars), returns data to compose_suite
pt_prompts.py           # loads system prompt and reads .txt prompt files; parses METER from text
pt_config.py            # instruments meta & helpers (ranges, display info)
dcn_client.py           # DCN HTTP + SDK wrapper: auth, post_feature, execute_pt
tools/pt2midi.js        # (Node) PT-JSON → .mid writer using jzz + jzz-midi-smf
prompts/
  system/global.txt     # global system prompt (composer persona + hard rules)
  user/*.txt            # your prompts (filename order = suite order)
runs/                   # auto-created per full suite run with all artifacts
```
Meter is specified in your prompt text via a simple directive at the top:
```
METER: 3/4
```

Supported mappings on a 1/16 grid (ticks per bar):

- `3/4` → 12 ticks
- `4/4` → 16 ticks
- `2/4` → 8 ticks
- `1/4` → 4 ticks
If you omit `METER:`, the unit defaults to 3/4 (12 ticks).
Advanced: you can also force `BAR_TICKS: <int>`. The system appends exact hard MIDI ranges and a meter reminder to each user prompt automatically.
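The directive handling above can be sketched as a small parser. This is illustrative only — the real parsing lives in `pt_prompts.py`, and treating `BAR_TICKS:` as an override of `METER:` is an assumption:

```python
import re

# Ticks per bar on a 1/16 grid, as listed above.
TICKS_PER_BAR = {"3/4": 12, "4/4": 16, "2/4": 8, "1/4": 4}

def bar_ticks(prompt_text: str) -> int:
    """Resolve the bar length in ticks from a prompt's directives (sketch)."""
    m = re.search(r"^BAR_TICKS:\s*(\d+)", prompt_text, re.MULTILINE)
    if m:                                   # explicit tick count wins (assumed)
        return int(m.group(1))
    m = re.search(r"^METER:\s*(\d+/\d+)", prompt_text, re.MULTILINE)
    if m and m.group(1) in TICKS_PER_BAR:
        return TICKS_PER_BAR[m.group(1)]
    return 12                               # default: 3/4

print(bar_ticks("METER: 4/4\nTITLE\n..."))  # → 16
```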
Ordering: files in prompts/user/ are processed lexicographically; use numeric prefixes (e.g., 001_intro.txt, 010_clouds.txt, …).
Example prompt
```
METER: 3/4

TITLE
Airy chorale — six parts, soft dynamics.

INSTRUMENTS (EXACT)
[alto_flute, violin, bass_clarinet, trumpet, cello, double_bass].

CONSTRAINTS
Monophony per instrument; time uses only add {1,2,3,4}; durations fit gaps; pitch add/sub only; no overlaps.

GOAL
A luminous, stepwise texture with occasional small leaps and corrective motion. Close but non-triadic vertical colors.
```
- Write prompts in `prompts/user/` (include `METER: ...` in the text when you need a meter change).
- Generate the suite:

```
python compose_suite.py
```

Outputs (per run) in `runs/<timestamp>_suite/`:

- `composition_suite.json` — the stitched, multi-track PT output (for visualisers/MIDI export)
- `schedule.json` — unit start offsets & meters
- `pt_journal.json` — compact log of posted/executed PTs
- `prompts_and_summaries.txt` — the rendered prompts and computed summaries
- `manifest.json` — filenames & totals
(Optionally mirrored elsewhere by your own scripts.)
This repo ships with a small Node tool that converts the stitched PT JSON to a Standard MIDI File.
```
npm install
```

When you run:

```
python compose_suite.py
```

the pipeline writes:

- `runs/<ts>_suite/composition_suite.json`
- `runs/<ts>_suite/composition_suite.mid` ← MIDI (auto)
- plus the usual logs and summaries
If Node isn’t installed or dependencies are missing, the run still completes; only the MIDI step is skipped with a warning.
You can create a .mid for any earlier suite folder:
```
node tools/pt2midi.js runs/<ts>_suite/composition_suite.json runs/<ts>_suite/composition_suite.mid
```

Latest run quick command (bash):

```
latest="$(ls -dt runs/*_suite | head -1)"
node tools/pt2midi.js "$latest/composition_suite.json" "$latest/composition_suite.mid"
```

Batch all runs (bash):

```
for d in runs/*_suite; do
  [ -f "$d/composition_suite.json" ] && \
    node tools/pt2midi.js "$d/composition_suite.json" "$d/composition_suite.mid"
done
```

The converter uses the per-instrument GM programs/banks from `instrument_meta` (if present). It also embeds time-signature marks from the `numerator`/`denominator` streams.
You don’t have to render the whole suite every time.
Use the `ONLY` env var to filter `prompts/user/*` by filename (glob patterns supported):

```
# single file
ONLY=001.txt python compose_suite.py

# prefix match
ONLY=010* python compose_suite.py

# any 00x file
ONLY=00?.txt python compose_suite.py
```

Files are still processed in lexicographic order after filtering.
Every run checkpoints into runs/<timestamp>_suite/. If a render stops part-way, you can continue from where it left off:
```
python compose_suite.py --resume runs/20251020-195317_suite
```

What resume does:
- Reuses the saved template order and already-finished units
- Preserves the rolling PT context so later bars stay musically consistent
- Keeps writing into the same suite folder
You’ll also see periodic partial outputs for quick inspection:
- `composition_suite.partial.json`
- `schedule.partial.json`
Tune how often those are emitted with:
```
# write partial stitched outputs every K units (default 5)
python compose_suite.py --checkpoint-every 3

# or via env var
CHECKPOINT_EVERY=3 python compose_suite.py
```

Tip: to resume the most recent suite quickly:

```
latest="$(ls -dt runs/*_suite | head -1)"
python compose_suite.py --resume "$latest"
```

After each unit, we capture the model's raw PT JSON (minified) and feed the entire accumulated history of those bundles into the next call as a special reference system message.
- The model sees exactly what it previously emitted (bars/sections, features, run_plan, seeds, etc.).
- We instruct it not to echo those prior objects; it must return only one JSON object for the current unit.
- Literal reuse becomes possible: when you ask to “reuse the Background Loop from the reference bundle that has N=… (transpose +2, rename),” the model can copy the prior structure.
- We no longer pass earlier user prompts or summaries into the model's context. Those remain in `runs/<ts>_suite/prompts_and_summaries.txt` for human reading.
By default, each unit call includes a system “reference” block containing prior model JSON bundles (not your text prompts). You can now control how many of those previous bundles are included, and cap their total size.
Flags / env vars
- `--context-last N` or `CONTEXT_LAST=N`
  - `all` (default) → include all prior bundles (subject to budget)
  - `1` → include only the last prior bundle
  - `0` → include none (each unit composes in isolation)
  - Any integer `N ≥ 0` is accepted
- `--context-budget CHARS` or `CONTEXT_BUDGET_CHARS=CHARS`
  - Max characters from prior bundles to embed (default 15000).
  - If the budget is tight, fewer than `N` may be included.
Behavior
- The generator packs the latest bundles first until the budget is reached.
- Only the model’s JSON outputs are included, never earlier user prompts.
- Works with resume: you can change these flags mid-run; inclusion is recomputed from the stored bundle list.
When to use what
- `--context-last 1` — great for sections that should literally repeat or closely reference the immediately preceding unit.
- `--context-last all` — best for long-range continuity and thematic recall across many units.
- `--context-last 0` — useful for independent sections or A/B experiments.
Examples
```
# Only the last prior bundle (tight repeat control)
python compose_suite.py --context-last 1

# All prior bundles, but allow a bigger budget
python compose_suite.py --context-last all --context-budget 30000

# No prior context (fresh start every unit)
python compose_suite.py --context-last 0

# Using env vars instead of flags
CONTEXT_LAST=1 CONTEXT_BUDGET_CHARS=20000 python compose_suite.py
```

Tip: If you see the model missing a repeat because the reference didn't fit, raise `--context-budget` or reduce `--context-last` so the most recent bundle always fits.
- Allowed ops: `add`, `subtract`, `mul`, `div` (exact spelling).
- Duration: live values must be in `{1, 2, 3, 4}`; automatically capped to the next onset and bar end for safety.
- Pitch: `add`/`subtract` only; kept inside hard MIDI ranges (per instrument).
- Meter: seeds set from your `METER:` directive; meter dims use constant `add 0`.
- Monophony: enforced via time/duration rules and capping.
If a bundle violates constraints, the run raises with a clear error pointing to the offending feature/dimension.
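A rough sketch of the kind of per-dimension check this implies (function name, error text, and the decision to check duration operands are all illustrative; the real validator is part of the pipeline):

```python
# Assumed constants, mirroring the rules listed above.
ALLOWED_OPS = {"add", "subtract", "mul", "div"}
PITCH_OPS = {"add", "subtract"}

def validate_dimension(feature: str, dim: str, ops: list[tuple[str, int]]) -> None:
    """Raise ValueError naming the offending feature/dimension (sketch)."""
    allowed = PITCH_OPS if dim == "pitch" else ALLOWED_OPS
    for name, operand in ops:
        if name not in allowed:
            raise ValueError(f"{feature}/{dim}: op '{name}' not allowed")
        # Duration live values must stay in {1,2,3,4}; checking operands
        # here is a simplification of that rule.
        if dim == "duration" and operand not in {1, 2, 3, 4}:
            raise ValueError(f"{feature}/{dim}: value {operand} outside {{1,2,3,4}}")
```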
- `ModuleNotFoundError: dcn` → Install the DCN SDK so `import dcn` works.
- Model output isn't valid JSON → Tighten the prompt ("return the bundle(s) only; one JSON object; no prose").
- Notes overlap or spill → The runner caps durations to the next onset/bar end, but keep your durations ≤ the smallest time step to remain musical.
- MIDI export fails → Ensure Node 18+ and `npm install` were executed; check the input path to `composition_suite.json`.
MIT