A collection of agent-based models exploring cooperation, altruism, and eco-evolutionary dynamics.
The current website-ready evolved-cooperation examples in this repo are:
- `spatial_altruism/`: a minimal spatial altruism model
- `cooperative_hunting/`: a spatial predator-prey-grass cooperative-hunting model
- `spatial_prisoners_dilemma/`: a spatial Prisoner's Dilemma ecology with local play, movement, reproduction, and inherited same-vs-other strategy encodings
- `retained_benefit/`: a lattice model that tests how much cooperative benefit must be routed back toward cooperators or their copies before cooperation can spread
EvolvedCooperation is the canonical implementation repo for the website-ready
evolved-cooperation models.
The public website https://humanbehaviorpatterns.org/ is built from the
sibling human-cooperation-site repo and should describe these models
1-to-1.
Current required mapping:
- `spatial_altruism/` in this repo <-> the `spatial_altruism` page/section in `human-cooperation-site`
- `cooperative_hunting/` in this repo <-> the `cooperative_hunting` page/section in `human-cooperation-site`
- `spatial_prisoners_dilemma/` in this repo <-> the `spatial-prisoners-dilemma` page/section in `human-cooperation-site`
- `retained_benefit/` in this repo <-> the `retained-benefit` page/section in `human-cooperation-site`
Working rule:
- when a model implementation changes here, review the matching website page
- when a website explanation changes there, keep it faithful to the Python code here
This repo uses a project-local Conda environment stored at .conda/ so it travels with the workspace and VS Code can auto-select it.
- Interpreter path: `/home/doesburg/Projects/EvolvedCooperation/.conda/bin/python`
- VS Code setting: see `.vscode/settings.json` (we set `python.defaultInterpreterPath`, point VS Code at the local Conda executable, and use a repo-specific terminal profile instead of fixed-script launch entries)
- Matplotlib cache/config path for VS Code runs: `.vscode/.env` sets `MPLCONFIGDIR=.matplotlib`
- Ruff editor linting: install Ruff into the project environment with `./.conda/bin/python -m pip install ruff`
- Pylance note: `.vscode/settings.json` disables `reportMissingModuleSource` so compiled Matplotlib modules do not produce false-positive import warnings in editor diagnostics
The workspace is configured so VS Code uses the repo-local .conda deterministically:
- `Terminal => New Terminal` opens `bash (.conda)`, which sources the normal shell startup and then activates `/home/doesburg/Projects/EvolvedCooperation/.conda`.
- `Run => Run Without Debugging` uses `.vscode/launch.json` plus `.vscode/run_active_python.py` to inspect the active editor file.
- If the active file lives inside a Python package in the repo, the helper runs it with module semantics (`runpy.run_module(...)`), which matches `python -m ...` from the repo root and satisfies module-only guards.
- If the active file is not inside a package, the helper falls back to normal script execution (`runpy.run_path(...)`).
- The launch config still forces `${workspaceFolder}/.conda/bin/python`, so runs do not depend on whichever interpreter VS Code happened to remember previously.
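The package-vs-script decision described above can be sketched as follows. This is a simplified illustration of the idea behind `.vscode/run_active_python.py`, not its actual contents; the function names and the `__init__.py` walk are assumptions:

```python
import os
import runpy

def module_name_for(path):
    """Return a dotted module name if `path` sits inside a Python package
    (every parent directory up to the package root has an __init__.py),
    else None. A simplified sketch, not the actual helper."""
    path = os.path.abspath(path)
    directory = os.path.dirname(path)
    parts = [os.path.splitext(os.path.basename(path))[0]]
    # Walk upward while each directory is a package (has __init__.py).
    while os.path.isfile(os.path.join(directory, "__init__.py")):
        parts.insert(0, os.path.basename(directory))
        directory = os.path.dirname(directory)
    return ".".join(parts) if len(parts) > 1 else None

def run_active_file(path):
    """Run a file with module semantics when possible, script semantics otherwise."""
    name = module_name_for(path)
    if name is not None:
        # Matches `python -m name` from the repo root, so module-only guards
        # and package-relative imports behave as expected.
        runpy.run_module(name, run_name="__main__")
    else:
        # Plain script execution for files outside any package.
        runpy.run_path(path, run_name="__main__")
```

Running from the repo root matters here: `runpy.run_module` resolves the dotted name against `sys.path`, which is why the launch config pins the working directory and interpreter.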
Activate the environment in a terminal when running commands manually:
source /home/doesburg/miniconda3/etc/profile.d/conda.sh
conda activate "$(pwd)/.conda"
# or run without activation using the interpreter directly:
./.conda/bin/python -m pip install -r requirements.txt
./.conda/bin/python -m spatial_altruism.altruism_model

If you see a “bad interpreter” error, regenerate entry scripts (pip, etc.) with:
./.conda/bin/python -m pip install --upgrade --force-reinstall pip setuptools wheel

The most actively documented ecology model in the repo lives in `cooperative_hunting/`.
- Main runtime: `cooperative_hunting/cooperative_hunting.py`
- Active parameters: `cooperative_hunting/config/cooperative_hunting_config.py`
- Detailed model notes and theory mapping: `cooperative_hunting/README.md`
Current mechanics in that model:
- predators carry a heritable continuous hunt investment trait `hunt_investment_trait` in [0, 1]
- hunt contribution is `predator_energy * hunt_investment_trait`
- predator cooperation cost is paid directly as `predator_cooperation_cost_per_unit * hunt_investment_trait`
- the config file now uses descriptive canonical parameter names, while legacy short aliases remain accepted for backward compatibility
- optional plasticity has been removed from the active code path, so the stored trait is the value used for hunting and cost
Browser replay preview:
Click the full-window animation preview to open the GitHub Pages replay viewer.
On 2026-04-06, the repo-level website root was turned into a multi-demo landing page.
Stepwise impact:
- `docs/index.html` now acts as a landing page that lists the available replay demos instead of embedding one specific simulation.
- The cooperative-hunting browser replay now lives at `docs/cooperative-hunting/index.html`.
- The spatial-altruism browser replay continues to live at `docs/spatial-altruism/index.html`.
- The retained-benefit browser replay now also lives at `docs/retained-benefit/index.html`.
- README links now point directly to each demo route instead of assuming the root site always hosts one specific replay.
On 2026-04-10, the landing page gained a conceptual display that clarifies the eco-evolutionary feedback loop around learning and plasticity.
Stepwise impact:
- `docs/index.html` now includes a full-width "Why the feedback loop matters" section beneath the demo cards.
- The new display presents the loop as a four-step sequence: evolution shapes learning capacities, learning reshapes ecological structure, ecological structure reshapes selection gradients, and plasticity closes the loop.
- The landing page now also contrasts unstable and stable environments so the selection logic behind higher versus lower plasticity is visible at a glance.
- `docs/style.css` now includes responsive home-page styles for that explanatory display while staying in the existing card-based visual system.
On 2026-04-06, the repo gained an explicit GitHub Pages deployment workflow for the interactive viewers.
Stepwise impact:
- `.github/workflows/deploy-pages.yml` now publishes the repo-level `docs/` site on pushes to `main`.
- `docs/index.html` now labels the demo entry points as "Open Interactive Viewer" so the viewer routes are explicit.
- The public routes now include `docs/cooperative-hunting/index.html`, `docs/spatial-altruism/index.html`, and `docs/retained-benefit/index.html`; the workflow only changes how those pages are deployed.
- If the repository Pages setting is not already using GitHub Actions, switch it there so this workflow becomes the active publisher.
Project convention for this model:
- prefer editing parameters inside the config file rather than passing CLI parameter overrides
- run from repo root with `./.conda/bin/python`
Minimal run example:
./.conda/bin/python -m cooperative_hunting.cooperative_hunting

Current website examples under evolved cooperation:
- Spatial Altruism -> `spatial_altruism/altruism_model.py`
- Cooperative Hunting -> `cooperative_hunting/cooperative_hunting.py`
- Spatial Prisoner's Dilemma -> `spatial_prisoners_dilemma/spatial_prisoners_dilemma.py`
- Retained Benefit -> `retained_benefit/retained_benefit_model.py`
Taken together, the four current website-facing evolved-cooperation modules do not support a strong claim that cooperation simply appears by default. They support a more specific claim: cooperation persists only when the update rules and ecology give cooperators some protection against immediate exploitation.
A useful near-universal formulation is: cooperation evolves when the benefits created by cooperation flow back to cooperators, or to copies of the cooperative rule, strongly enough to outweigh the private cost. In shorthand: there is no cooperation without feedback.
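The shorthand "no cooperation without feedback" can be written as a one-line check. This is an illustrative formalization of the verbal condition above; the parameter names are not taken from any module's config:

```python
def cooperation_can_spread(benefit_created, feedback_fraction, private_cost):
    """Sketch of the verbal condition: cooperation pays only when the share
    of the created benefit routed back to cooperators (or copies of the
    cooperative rule) exceeds the private cost of cooperating."""
    returned_benefit = benefit_created * feedback_fraction
    return returned_benefit > private_cost

# Same benefit and cost, different routing: only strong feedback pays.
weak = cooperation_can_spread(benefit_created=1.0, feedback_fraction=0.1,
                              private_cost=0.3)   # False
strong = cooperation_can_spread(benefit_created=1.0, feedback_fraction=0.5,
                                private_cost=0.3)  # True
```

Each module below realizes `feedback_fraction` through a different mechanism: spatial clustering, conditional reciprocity, hunting synergy, or an explicit routing parameter.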
The new retained_benefit/ module is the repo's most direct attempt to test
that claim in a deliberately abstract form.
Shared pattern across the current models:
- There must be heritable variation in a cooperative trait or strategy.
- Interactions must be local enough that cooperative benefits are not spread completely at random.
- Some feedback mechanism must return enough of the cooperative benefit back toward cooperators.
- Reproduction and turnover must allow successful local structures to spread.
- The private cost of cooperation must stay low enough relative to the protected benefit.
The four modules implement that protection in different ways:
- `spatial_altruism/`: local clustering plus void competition and disturbance can support altruist-selfish coexistence
- `spatial_prisoners_dilemma/`: conditional reciprocity can outperform pure defection, but it still yields coexistence rather than universal cooperation
- `cooperative_hunting/`: costly cooperation can pay when coordinated hunting creates real synergy, but the current active baseline is a supported-start threshold-synergy case rather than a pure de novo emergence test
- `retained_benefit/`: cooperation rises only when enough of the benefit it creates is routed back toward cooperators or their copies rather than leaking broadly to free-riders
So the strongest repo-level conclusion at this stage is modest:
- the minimal conditions are not one magic parameter, but a bundle of assortment, feedback, inheritance, and a favorable cost-benefit ratio
- without such protection, selfish behavior usually wins
- with it, cooperation can persist, spread, or coexist, depending on the mechanism
- these models are mechanism-level demonstrations, not a universal law of the evolution of cooperation
On 2026-04-06, the package directory for the predator-prey-grass model was renamed from predpreygrass_cooperative_hunting/ to cooperative_hunting/.
Stepwise impact:
- The Python package now lives at `cooperative_hunting/`.
- Module entrypoints now use `./.conda/bin/python -m cooperative_hunting...` from the repo root.
- Internal asset paths moved from `assets/predprey_cooperative_hunting/` to `assets/cooperative_hunting/`.
- Utility output paths now write to `cooperative_hunting/images/`.
- The package rename initially affected the Python/package layer; the public viewer route was renamed separately on 2026-04-07.
On 2026-04-07, the cooperative-hunting browser viewer and website slug were renamed from predator-prey-cooperative-hunting to cooperative-hunting.
Stepwise impact:
- The repo-level replay page moved from `docs/predator-prey-cooperative-hunting/index.html` to `docs/cooperative-hunting/index.html`.
- GitHub Pages links now point to `/cooperative-hunting/`.
- The `humanbehaviorpatterns.org` page and replay paths now use `/evolved-cooperation/cooperative-hunting/`.
- The public viewer title and landing-page label now read "Cooperative Hunting", while the descriptive copy still explains that it is a predator-prey-grass ecology.
- Description: Patch-based grid simulation of altruism vs selfishness, ported from NetLogo to Python/NumPy.
- Browser replay preview:
- Features:
- Each cell can be empty (black), selfish (green), or altruist (pink)
- Simulates benefit/cost of altruism, fitness, and generational updates
- Fully vectorized NumPy implementation for fast simulation
- Pygame UI for interactive exploration
- Matplotlib plots for population dynamics
- Grid search for parameter sweeps
- Sampled browser replay and README GIF preview
- Files:
  - `spatial_altruism/altruism_model.py`: Core simulation logic
  - `spatial_altruism/altruism_pygame_ui.py`: Pygame-based interactive UI
  - `spatial_altruism/config/altruism_config.py`: Active runtime configuration
  - `spatial_altruism/config/altruism_website_demo_config.py`: Frozen website replay configuration
  - `spatial_altruism/images/`: Plotting scripts and generated image or Plotly outputs
  - `spatial_altruism/utils/export_github_pages_demo.py`: Website replay and preview GIF exporter
  - `spatial_altruism/utils/altruism_grid_search.py`: Parallel grid search for extended coexistence sweeps
  - `spatial_altruism/data/grid_search_results_extended.csv`: Results from the parallel grid search
- Usage:
  - Run core model (edit `spatial_altruism/config/altruism_config.py` first if needed):
    `./.conda/bin/python -m spatial_altruism.altruism_model`
  - Run Pygame UI:
    `./.conda/bin/python -m spatial_altruism.altruism_pygame_ui`
  - Run grid search:
    `./.conda/bin/python -m spatial_altruism.utils.altruism_grid_search`
  - Regenerate website replay bundle:
    `./.conda/bin/python -m spatial_altruism.utils.export_github_pages_demo`
- Requirements:
- Python 3.8+
- numpy
- pygame (for UI)
- matplotlib (for plotting)
- torch (for surface fitting)
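The vectorized benefit/cost logic named in the features list can be sketched in a few NumPy lines. This is an illustrative toy step on a toroidal lattice, not the actual rules in `spatial_altruism/altruism_model.py`; the `benefit` and `cost` parameter names and the neighbor-averaging scheme are assumptions:

```python
import numpy as np

def altruist_fitness_step(grid, benefit=0.5, cost=0.1):
    """One vectorized fitness evaluation (illustrative sketch only).
    grid codes: 0 = empty, 1 = selfish, 2 = altruist."""
    altruist = (grid == 2).astype(float)
    # Count altruistic Moore neighbors with periodic (wrap-around) boundaries.
    neighbors = sum(np.roll(np.roll(altruist, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    occupied = grid > 0
    # Everyone gains from nearby altruists; only altruists pay the cost.
    fitness = np.where(occupied, 1.0 + benefit * neighbors / 8.0, 0.0)
    fitness -= np.where(grid == 2, cost, 0.0)
    return fitness
```

The `np.roll` trick is what makes the implementation "fully vectorized": all eight neighbor offsets are computed as whole-array shifts instead of per-cell loops.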
- Description: Spatial predator-prey ecology where predators evolve a continuous cooperation trait that affects group hunting success, payoff sharing, and private cooperation cost.
- Files:
  - `cooperative_hunting/cooperative_hunting.py`: core simulation and runtime entry point
  - `cooperative_hunting/config/cooperative_hunting_config.py`: active runtime parameters
  - `cooperative_hunting/utils/matplot_plotting.py`: Matplotlib plotting helpers for baseline runs
  - `cooperative_hunting/utils/sweep_dual_parameter.py`: parameter sweep tooling
  - `cooperative_hunting/utils/tune_mutual_survival.py`: coexistence tuning utilities
  - `cooperative_hunting/README.md`: detailed interpretation and experiment guide
- Usage:
  - Edit parameters in `cooperative_hunting/config/cooperative_hunting_config.py`
  - Run: `./.conda/bin/python -m cooperative_hunting.cooperative_hunting`
- Current status:
  - uses raw inherited `hunt_investment_trait` directly for hunt effort and cooperation cost
  - supports equal-split or contribution-weighted prey sharing
  - includes headless analysis, pygame live rendering, and sweep/tuning helpers
- Description: Spatial Prisoner's Dilemma ecology inspired by the FLAMEGPU implementation from `zeyus-research/FLAMEGPU2-Prisoners-Dilemma-ABM`. Agents interact locally, move when isolated, reproduce into neighboring empty cells, and inherit mutable strategies.
- Relation to the other evolved-cooperation models:
  - relative to `spatial_altruism/`, this model adds explicit agents, energy budgets, pairwise Prisoner's Dilemma play, movement, and conditional reciprocity; `spatial_altruism/` is the simpler lattice model of altruist versus selfish site competition
  - relative to `cooperative_hunting/`, this model is more game-theoretic and less ecological: it has no prey, grass, hunt coalitions, or continuous cooperation trait
  - taken together, the three models form a progression from local altruist-benefit selection (`spatial_altruism/`), to local reciprocity and inherited response rules (`spatial_prisoners_dilemma/`), to ecological synergy in predator group hunting (`cooperative_hunting/`)
- Files:
  - `spatial_prisoners_dilemma/spatial_prisoners_dilemma.py`: core runtime, logging, and summary output
  - `spatial_prisoners_dilemma/config/spatial_prisoners_dilemma_config.py`: active runtime parameters
  - `spatial_prisoners_dilemma/config/spatial_prisoners_dilemma_website_demo_config.py`: frozen website replay configuration
  - `spatial_prisoners_dilemma/utils/matplot_plotting.py`: Matplotlib plotting helpers
  - `spatial_prisoners_dilemma/utils/export_github_pages_demo.py`: website replay exporter
  - `spatial_prisoners_dilemma/README.md`: detailed mechanism and adaptation notes
- Usage:
  - Edit parameters in `spatial_prisoners_dilemma/config/spatial_prisoners_dilemma_config.py`
  - Run: `./.conda/bin/python -m spatial_prisoners_dilemma.spatial_prisoners_dilemma`
  - Regenerate website replay bundle: `./.conda/bin/python -m spatial_prisoners_dilemma.utils.export_github_pages_demo`
- Current status:
  - preserves the intended spatial play, movement, reproduction, mutation, and culling logic from the external model family
  - uses smaller CPU-friendly defaults instead of the original CUDA-scale population sizes
  - now exports both JSON run logs for analysis and a sampled website replay bundle from a frozen public config
  - now maps to the `human-cooperation-site` page at `/evolved-cooperation/spatial-prisoners-dilemma/`
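The local pairwise play and the same-vs-other strategy encoding can be sketched briefly. The standard PD payoff ordering (T > R > P > S) is well known, but the concrete values and the encoding below are illustrative assumptions, not read from the repo's config:

```python
# One-shot PD payoffs with the standard ordering T > R > P > S
# (these particular numbers are illustrative, not the repo's values).
PAYOFF = {
    ("C", "C"): (3, 3),  # reward for mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # punishment for mutual defection
}

def play_round(move_a, move_b):
    """Return the two payoffs from one local pairwise interaction."""
    return PAYOFF[(move_a, move_b)]

def choose_move(strategy, partner_is_same_type):
    """Sketch of a same-vs-other encoding: the inherited strategy holds one
    response toward the agent's own type and another toward everyone else."""
    move_vs_same, move_vs_other = strategy
    return move_vs_same if partner_is_same_type else move_vs_other

# An agent that cooperates with its own type but defects against others:
m = choose_move(("C", "D"), partner_is_same_type=True)  # -> "C"
```

Under this kind of encoding, assortment does the protective work: agents that cooperate only with their own type recover their cooperative benefit from neighbors carrying the same rule.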
- Description: Abstract lattice model that tests a general cooperation condition: cooperation spreads when enough of the value it creates is routed back toward cooperators, or copies of the cooperative rule, to offset its private cost.
- Relation to the other evolved-cooperation models:
  - compared with `spatial_altruism/`, it replaces binary altruist-versus-selfish site types with a continuous cooperation trait and an explicit benefit-routing split
  - compared with `spatial_prisoners_dilemma/`, it removes repeated-game memory and discrete strategy families so the feedback structure is easier to isolate
  - compared with `cooperative_hunting/`, it removes predator-prey ecology and hunt-coalition mechanics so cooperative synergy is reduced to an abstract routing problem
  - it is therefore the most abstract website-facing module in the repo and the most direct test here of the claim that cooperation requires feedback
- Files:
  - `retained_benefit/retained_benefit_model.py`: core runtime, local benefit-routing rule, and summary output
  - `retained_benefit/retained_benefit_pygame_ui.py`: live lattice viewer with cooperation and lineage modes
  - `retained_benefit/config/retained_benefit_config.py`: active runtime parameters
  - `retained_benefit/config/retained_benefit_website_demo_config.py`: frozen website replay configuration
  - `retained_benefit/utils/matplot_plotting.py`: Matplotlib plotting helpers
  - `retained_benefit/utils/export_github_pages_demo.py`: website replay exporter
  - `retained_benefit/README.md`: detailed rationale and model explanation
- Usage:
  - Edit parameters in `retained_benefit/config/retained_benefit_config.py`
  - Run: `./.conda/bin/python -m retained_benefit.retained_benefit_model`
  - Run live viewer: `./.conda/bin/python -m retained_benefit.retained_benefit_pygame_ui`
  - Regenerate website replay bundle: `./.conda/bin/python -m retained_benefit.utils.export_github_pages_demo`
- Current status:
  - implements continuous cooperation traits plus inherited lineage labels on a spatial lattice
  - treats `retained_benefit_fraction` as the primary abstraction parameter
  - includes a Pygame viewer that can switch between cooperation intensity and lineage structure
  - exports a sampled website replay bundle from a frozen public config
  - writes JSON logs for headless analysis and can show a small Matplotlib summary figure
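The role of `retained_benefit_fraction` can be sketched as a simple routing split. This is an illustrative sketch of the abstraction described above, not the actual rule in `retained_benefit_model.py`; the function name and neighborhood accounting are assumptions:

```python
def route_benefit(created_benefit, retained_benefit_fraction,
                  n_cooperators, n_neighbors):
    """Sketch of the routing split: a fraction of the created benefit is
    returned to local cooperators, the remainder leaks to all neighbors
    alike, free-riders included (illustrative only)."""
    retained = created_benefit * retained_benefit_fraction
    leaked = created_benefit - retained
    per_cooperator = retained / n_cooperators if n_cooperators else 0.0
    per_neighbor = leaked / n_neighbors if n_neighbors else 0.0
    return per_cooperator, per_neighbor

# With 80% retention, cooperators capture most of the value they create.
coop_share, leak_share = route_benefit(10.0, 0.8,
                                       n_cooperators=2, n_neighbors=8)
# coop_share -> 4.0 per cooperator, leak_share -> 0.25 per neighbor
```

Sweeping `retained_benefit_fraction` from 0 toward 1 then directly probes the repo's central claim: below some threshold the leaked share subsidizes free-riders and cooperation collapses; above it the retained share offsets the private cost.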
On 2026-04-17, spatial_prisoners_dilemma/ was added as a new experimental
module in this repository.
Stepwise impact:
- The repo now contains a dedicated spatial Prisoner's Dilemma package rather than only a future experiment note.
- The new module follows the same package-run convention as the newer models: edit the config file, then run it from the repo root with `python -m`.
- The implementation keeps the external model's central mechanism family: local pairwise PD interactions, fallback movement, local reproduction, inheritance, mutation, death, and a hard population cap.
- The default world size is reduced so the model remains practical as a pure Python CPU simulation in this repo.
- The package is canonical on the Python side now and has a matching `human-cooperation-site` page and replay route.
On 2026-04-18, spatial_prisoners_dilemma/ gained a frozen website-demo config
and replay export pipeline.
Stepwise impact:
- `spatial_prisoners_dilemma/config/spatial_prisoners_dilemma_website_demo_config.py` now freezes the public site run.
- `spatial_prisoners_dilemma/utils/export_github_pages_demo.py` now exports a sampled static replay bundle under `docs/data/spatial-prisoners-dilemma-demo/`.
- The sibling `human-cooperation-site` repo now has a matching page and replay route at `/evolved-cooperation/spatial-prisoners-dilemma/`.
- Cross-repo fidelity for this module now includes both the explanatory docs page and the sampled browser replay data bundle.
Install dependencies:
pip install numpy pygame matplotlib torch

For Pygame visualization, you may need:

conda install -y -c conda-forge gcc=14.2.0

- Original NetLogo models from Uri Wilensky and the EACH unit (Evolution of Altruistic and Cooperative Habits)
- See `spatial_altruism/README.md` for more details

