Agent Skills for computational modelers: documentation, reproducibility, publication, and execution.
This repository hosts a curated collection of Agent Skills designed to help researchers and developers develop and share computational models in the social and ecological sciences. Skills are reusable procedural workflows that enhance AI agents to accomplish specialized tasks.
These skills are designed for coding-capable AI agents that can:
- read skill instructions from `SKILL.md`
- execute shell commands
- inspect and modify repositories
- run local tools such as Python, Git, and Docker
Compatible environments include:
- Visual Studio Code
- VSCodium (requires additional setup to enable a coding agent such as GitHub Copilot)
- ChatGPT with coding tools enabled
- Claude Code
- Cursor Agent
- (contributions welcome, this is a rapidly evolving space)
At minimum, your agent should support:
- filesystem access
- terminal execution
- multi-file editing
Once you have access to a coding agent, set up Node.js on your system so you can use the standard `npx skills ...` commands to manage your skills collections. Agent Skills are simply a set of files installed into a local directory managed by `npx skills` (either globally, for use across all of your projects, or into a specific project).
We recommend using the Node Version Manager (`nvm`) to flexibly install and manage Node versions.
Security best practices:
- Install only from the official nvm-sh/nvm repository.
- Pin the installer to a specific release tag instead of running an unpinned command.
- Review the installer script before executing it.
- Avoid `sudo npm -g ...`; use user-level installs with `nvm`.
- Install prerequisites

  WSL / Linux:

  ```bash
  sudo apt update
  sudo apt install -y curl ca-certificates git
  ```

  macOS (with Homebrew):

  ```bash
  brew install curl ca-certificates git
  ```

- Install `nvm` from an official tagged release

  Choose the latest release tag from: https://github.com/nvm-sh/nvm/releases

  ```bash
  export NVM_VERSION="v0.40.4" # change to latest release
  curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/${NVM_VERSION}/install.sh | bash
  ```

- Load `nvm` in your current shell, or close and restart your shell
The following commands should be auto-appended to your shell profile (.bashrc / .zshrc / etc) but in case they aren't, make sure they are present:
```bash
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
```

If needed, restart your terminal so your shell profile changes take effect.
- Install and use the latest Node LTS

  ```bash
  nvm install --lts
  nvm alias default 'lts/*'
  nvm use --lts
  ```

- Verify toolchain

  ```bash
  node -v
  npm -v
  npx -v
  ```

- Continue with skills installation

  ```bash
  npx skills add comses/skills
  # alternatively, install from github directly
  npx skills add https://github.com/comses/skills
  ```

- Keep Node LTS current (maintenance)

  ```bash
  nvm install --lts --reinstall-packages-from=current
  nvm use --lts
  ```

Examples:
- Cursor: open the project folder and enable Agent mode
- Claude Code: run `claude` in the project root
- ChatGPT: open the repository in a coding-enabled workspace
Try:
What skills are available from the comses/skills collection?
or:
Read the installed skills and summarize when each should be used.
The skills are generally triggered whenever you reference them by name, or you can invoke them via their associated slash commands. Examples:
/document generate ODD+2 documentation for this model.
or
Use the document skill to generate ODD+2 documentation for this model.
/peer-review evaluate this repository for reproducibility readiness
or
Use the peer-review skill to evaluate this repository for reproducibility readiness.
etc.
Other examples:
- "Set up an OSPool batch scaffolder for my sensitivity analysis"
- "Generate a FAIR4RS publication checklist for this model"
- "Generate a FAIR publication checklist for my model's output data"
This repository currently includes five skills covering core computational modeling needs:
`document`: Generates and iteratively improves ODD+2 (Overview, Design Concepts, Details) documentation for agent-based models. Use when you have model code and need publication-ready narrative documentation that satisfies the 23-point ODD+2 checklist.
Triggers: "Document my model", "Generate ODD", "Write model narrative"
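The underlying ODD protocol organizes a model description into seven elements; a documentation skeleton along these lines (a sketch of the standard ODD layout, not the skill's exact output) looks like:

```markdown
# ODD Description

## 1. Purpose and patterns
## 2. Entities, state variables, and scales
## 3. Process overview and scheduling
## 4. Design concepts
## 5. Initialization
## 6. Input data
## 7. Submodels
```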
`fair4rs`: Creates FAIR4RS metadata with `codemeta.json` as canonical machine-readable metadata, citation files derived from `codemeta.json`, publication checklists, and EVERSE-aligned software management plans to ensure your computational artifacts are ready for archival and publication. Use when preparing models for Zenodo, arXiv, or disciplinary repositories.
Triggers: "Prepare for publication", "Generate publication checklist", "Create FAIR metadata"
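For orientation, a minimal `codemeta.json` could look like the following sketch (all values are illustrative placeholders; the skill derives real entries from your repository):

```json
{
  "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
  "@type": "SoftwareSourceCode",
  "name": "example-model",
  "description": "An illustrative agent-based model.",
  "license": "https://spdx.org/licenses/MIT",
  "programmingLanguage": "Python",
  "author": [
    {
      "@type": "Person",
      "givenName": "Ada",
      "familyName": "Lovelace"
    }
  ],
  "codeRepository": "https://github.com/example/example-model",
  "version": "1.0.0"
}
```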
`ospool`: Generates HTCondor job submission scripts and parameter sweep configurations for running models on the Open Science Pool (OSPool). Use for batch processing, large parameter sweeps, or distributed sensitivity analysis.
Triggers: "Run on OSPool", "Generate HTCondor batch script", "Set up parameter sweep"
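As an illustration of what such a submission script contains, here is a hedged sketch of a minimal HTCondor submit file for a parameter sweep (`run_model.sh` and `params.txt` are hypothetical placeholders, not the skill's exact output):

```
# hypothetical submit file; executable and parameter file are placeholders
universe                = vanilla
executable              = run_model.sh
arguments               = $(param_a) $(param_b)
log                     = sweep.log
output                  = out/run_$(Process).out
error                   = out/run_$(Process).err
request_cpus            = 1
request_memory          = 2GB
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
# one job per line of params.txt (two whitespace-separated values per line)
queue param_a, param_b from params.txt
```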
`hpc`: Generates Slurm job scripts, job arrays, and resource allocation templates for running models on HPC systems. Use for multi-node simulations or large-scale experiments requiring direct HPC cluster access.
Triggers: "Run on HPC", "Generate Slurm script", "Set up batch array job"
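Similarly, a minimal Slurm job array of the kind this skill templates might look like this sketch (`run_model.py` and `params.txt` are hypothetical placeholders):

```bash
#!/bin/bash
#SBATCH --job-name=model-sweep
#SBATCH --array=0-99                 # one task per parameter set
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=02:00:00
#SBATCH --output=logs/%x_%A_%a.out   # job name, array job ID, task ID

# Pick the parameter set for this array task (line N+1 of params.txt).
PARAMS=$(sed -n "$((SLURM_ARRAY_TASK_ID + 1))p" params.txt)
python run_model.py $PARAMS
```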
`peer-review`: Evaluates computational model submissions for peer review readiness using required CoMSES criteria (ease of execution, documentation thoroughness, and code quality) plus supporting research software quality indicators inspired by EVERSE.
Triggers: "Peer review my model", "Is this model submission ready", "Review codebase quality", "Check reproducibility"
This repository also includes a local-only maintainer skill that is not part of the published `skills/` catalog:
Maintainer workflow for refreshing compressed artifacts, references, and eval expectations when upstream standards evolve.
Use cases:
- Refreshing rubric/indicator snapshots after upstream changes
- Keeping `SKILL.md`, `references/`, `assets/`, and `evals.json` synchronized in one PR
- Standardizing refresh PR notes for traceability
```
.
├── .github/
│   └── skills/
│       └── update-skill/               (repository-local maintainer skill)
│           ├── SKILL.md
│           ├── references/
│           │   └── REFRESH-WORKFLOW.md
│           └── assets/
│               └── REFRESH-PR-NOTE-TEMPLATE.md
├── AGENTS.md                           (repository-specific agent instructions)
├── README.md                           (this file)
├── CONTRIBUTING.md                     (contribution guidelines)
├── LICENSE                             (MIT)
├── .gitignore
├── Makefile                            (validation shortcuts)
├── docs/                               (repository-level documentation)
│   ├── agent-skills-creation-reference.md
│   ├── roadmap.md
│   └── SKILL-TEMPLATE.md               (copy/fill template for new skills)
├── evals/                              (cross-skill evals and schema)
├── scripts/                            (validation and reporting helpers)
└── skills/                             (all skill folders)
    ├── document/
    │   ├── SKILL.md
    │   └── evals.json
    ├── fair4rs/
    │   ├── SKILL.md
    │   └── evals.json
    ├── ospool/
    │   ├── SKILL.md
    │   └── evals.json
    ├── hpc/
    │   ├── SKILL.md
    │   └── evals.json
    └── peer-review/
        ├── SKILL.md
        └── evals.json
```
- Read AGENTS.md, CONTRIBUTING.md, and docs/agent-skills-creation-reference.md before drafting.
- Review Agent Skills best practices before drafting.
- Ground from real expertise: start from real task runs, corrections, and project artifacts, not generic advice.
- Scope coherently: define one composable unit of work and keep the boundary clear.
- Design for context efficiency: keep `SKILL.md` concise, move deep detail into `references/`, and add explicit load conditions.
- Prefer defaults over menus: choose one default tool or approach and use alternatives only as fallbacks.
- Create the skill folder with `/create-skill` if your agent supports it, or scaffold manually:

  ```bash
  mkdir -p skills/your-skill-name
  cp docs/SKILL-TEMPLATE.md skills/your-skill-name/SKILL.md
  cp skills/document/evals.json skills/your-skill-name/evals.json
  ```

- Fill in the YAML frontmatter and markdown instructions, then immediately rename `skill_name`, replace the copied prompts, and ensure `name:` matches the folder exactly.
- Include optional resources (`assets/`, `references/`, `scripts/`) as the workflow needs them.
- Refine with real execution: test should-trigger and should-not-trigger prompts, review execution traces, and iterate.
- Run the repository validators before opening a PR:

  ```bash
  python scripts/validate_individual_skills.py
  python scripts/validate_evals_schema.py
  python scripts/validate_cross_skills.py evals/cross-skills.json
  ```

- Submit a pull request with the skill folder, its `evals.json`, and the prompts or checks you used to validate it.
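As a rough illustration of the kind of check these validators perform, here is a hypothetical sketch (not the repository's actual script) that verifies a `SKILL.md` declares the required frontmatter fields:

```python
import re

REQUIRED_FIELDS = ("name:", "description:", "license:")

def validate_skill_md(text: str) -> list[str]:
    """Return a list of problems found in a SKILL.md's frontmatter."""
    problems = []
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return ["missing YAML frontmatter block"]
    frontmatter = match.group(1)
    for field in REQUIRED_FIELDS:
        # each required field must start a line inside the frontmatter
        if not re.search(rf"^{field}", frontmatter, re.MULTILINE):
            problems.append(f"missing required field {field!r}")
    return problems

sample = """---
name: your-skill-name
description: |
  Use this skill when...
license: MIT
---
# Instructions
"""
print(validate_skill_md(sample))  # prints []
```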
Each skill lives in its own folder with a required `SKILL.md` file:

```
your-skill-name/
├── SKILL.md       (required: frontmatter + instructions)
├── scripts/       (optional: Python/shell scripts for automation)
├── references/    (optional: compressed, detailed docs, checklists, guides)
└── assets/        (optional: templates, icons, example files)
```
Recommended semantic purpose of each component:
- `SKILL.md` -> orchestration and enforcement language (when to trigger, required workflow steps, output constraints)
- `assets/` -> reusable output artifacts (templates, starter files, structured output skeletons)
- `references/` -> normative guidance / rules / compressed artifacts (checklists, standards mappings, policy summaries)
- `scripts/` -> deterministic automation helpers (validation, generation, extraction)
Authoring guidance:
- Keep operational decision logic in `SKILL.md`; do not duplicate it across assets.
- Put reusable content the model can copy/fill into `assets/`.
- Put standards and rule-oriented material in `references/`.
Frontmatter (required fields):
```yaml
---
name: your-skill-name
description: |
  Use this skill when...
  Triggers: "phrase 1", "phrase 2"
  Expected output: ...
license: MIT
---
```

Optional fields:

```yaml
compatibility: Tool/version requirements
metadata:
  domain: computational-modeling | documentation | publication | execution
  maturity: alpha | beta | stable
  audience: modelers | researchers | data-scientists
  category: documentation | quality-assurance | execution | publication
```

See CONTRIBUTING.md, AGENTS.md, and docs/VALIDATION.md for full guidance.
See docs/roadmap.md for planned skills expanding into:
- Reproducibility & containerization (Docker, environment capture, snapshot verification)
- Data & lineage tracking (DVC integration, provenance metadata, parameter tracking)
- Analysis & validation (sensitivity analysis frameworks, unit testing templates, notebooks-to-workflows)
- Integration & composability (standard interchange formats, skill composition patterns)
- Agent Skills specification: agentskills.io
- Skills.sh leaderboard: skills.sh
- Agent Skills documentation: agentskills.io
- Agent Skills CLI: github.com/vercel-labs/skills
- Example skills repository: github.com/anthropics/skills
We welcome contributions! See CONTRIBUTING.md for:
- Contribution workflow
- Naming conventions and style guidance
- Review checklist
- Community contact
All skills in this repository are licensed under the MIT License unless otherwise noted in individual SKILL.md files.