This repository contains a multi-round simulation of Cardano stake pools and delegators powered by large language models (LLMs). Pool operators adjust their parameters in response to market signals, delegators rebalance their stake, and the simulation records the resulting network dynamics round by round.
- Multi-round environment where delegators and pool operators react to shared network briefings.
- Pool operators and delegators are persona-driven agents that call an OpenAI-compatible LLM to justify and execute their decisions.
- Delegator wealth follows a configurable power-law distribution; stake can be split across multiple pools every round.
- Each run logs full round transcripts, JSON state history, and inequality metrics (Gini coefficients) for later analysis.
- `main.py` – CLI entry point; parses arguments and starts a simulation run.
- `simulation.py` – Orchestrates rounds, builds network briefings, aggregates rewards, and streams logs/results to `results/<timestamp>/`.
- `pool_agents.py` – Persona-aware stake pool operators that revise pledge, margin, and cost after each round via an LLM call.
- `user_agents.py` – Delegator personas that decide how to split stake across pools. Ensure this module is available before running the simulation.
- `constants.py` – Network-wide parameters (`TOTAL_REWARDS`, `S_OPT`, `A0`) used in the Cardano reward formula.
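These constants feed the reward-sharing formula from Cardano's design documents. A minimal sketch of the maximal-pool-reward calculation follows; the function name and the reading of `S_OPT` as a saturation fraction `z0` are assumptions, and the repository's implementation may differ:

```python
def max_pool_reward(R, a0, z0, pledge, pool_stake, total_stake):
    """Maximal epoch reward for one pool, per the Cardano reward-sharing scheme.

    R  -- total rewards available this epoch (cf. TOTAL_REWARDS)
    a0 -- pledge influence factor (cf. A0)
    z0 -- saturation threshold as a fraction of total stake (cf. S_OPT)
    """
    sigma = pool_stake / total_stake   # pool's relative stake
    s = pledge / total_stake           # operator's relative pledge
    sigma_ = min(sigma, z0)            # stake beyond saturation earns nothing
    s_ = min(s, z0)
    return (R / (1 + a0)) * (
        sigma_ + s_ * a0 * (sigma_ - s_ * (z0 - sigma_) / z0) / z0
    )
```

With `a0 = 0` the formula collapses to the proportional share `R * sigma_`, which makes the saturation cap at `z0` easy to see.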
- Python 3.10 or newer.
- Dependencies listed in `requirements.txt` (`pip install -r requirements.txt`). If your `user_agents.py` relies on `pydantic`, install it alongside the listed packages.
- An OpenAI-compatible endpoint. By default the code targets a local Ollama server.
- Clone the repository
```bash
git clone <repo-url>
cd Cardano
```
- Install dependencies
```bash
pip install -r requirements.txt
```
- Configure environment variables
Create a `.env` file in the repository root (or export the variables in your shell):

```bash
OLLAMA_BASE_URL=http://localhost:11434/v1
OLLAMA_API_KEY=ollama
OLLAMA_MODEL=qwen2.5:7b-instruct
LLM_TEMPERATURE=0.0
```

To use OpenAI or another provider, set `OLLAMA_BASE_URL` to the service URL and provide the corresponding API key.
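Wiring these variables into an OpenAI-compatible client usually starts by collecting them with their documented defaults. A small sketch; the helper name `llm_config` is an assumption, not code from the repository:

```python
import os

def llm_config(env=os.environ):
    """Gather the LLM settings the simulation reads, with the documented defaults."""
    return {
        "base_url": env.get("OLLAMA_BASE_URL", "http://localhost:11434/v1"),
        "api_key": env.get("OLLAMA_API_KEY", "ollama"),
        "model": env.get("OLLAMA_MODEL", "qwen2.5:7b-instruct"),
        "temperature": float(env.get("LLM_TEMPERATURE", "0.0")),
    }

# An OpenAI-compatible client can then be built from this dict, e.g.
# OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"]).
```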
```bash
python main.py --rounds 10 --users 50 --pools 5
```

- `--rounds` – number of epochs to simulate (default `2`).
- `--users` – delegator count (default `2`).
- `--pools` – active stake pools (default `2`).
Each delegator and pool operator triggers an LLM request per decision, so larger runs can take time and consume credits when using paid endpoints.
Every run creates a timestamped folder under `results/`, e.g. `results/20251015-104646/`, containing:
- `simulation_log.txt` – human-readable round summaries, briefings, and agent actions.
- `simulation_results.json` – structured state snapshots per round, including allocations, rewards, and Gini metrics.
The console also streams delegation decisions and the latest inequality metrics to help monitor long simulations.
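The Gini metric in those logs can be recomputed from any stake snapshot. A minimal sketch using the standard sorted-rank formula; the repository's own implementation may differ in detail:

```python
def gini(values):
    """Gini coefficient of non-negative stakes: 0 = equality, -> 1 = one holder owns all."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Sorted-rank identity for the mean absolute difference.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n
```

For example, `gini([1, 1, 1, 1])` is `0.0`, while `gini([0, 0, 0, 1])` is `0.75`: with `n` holders the statistic tops out at `(n - 1) / n`.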
- CLI parameters – Adjust rounds, users, and pools per invocation.
- Network constants – Tweak `TOTAL_REWARDS`, `S_OPT`, and `A0` in `constants.py` to explore alternative reward curves.
- Stake distribution – Update `generate_powerlaw_stakes` in `simulation.py` (alpha, min/max stake, seed) to change the initial wealth profile.
- Agent personas – Edit persona weights/prompts in `simulation.py`, `user_agents.py`, and `pool_agents.py` to model new behaviors.
- LLM configuration – Override `OLLAMA_MODEL`, `OLLAMA_BASE_URL`, `OLLAMA_API_KEY`, or `LLM_TEMPERATURE` in the environment.
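An inverse-CDF power-law sampler along the lines of `generate_powerlaw_stakes` could look like the following; the parameter names, defaults, and clipping behavior are assumptions about the repository's version:

```python
import random

def generate_powerlaw_stakes(n_users, alpha=1.5, min_stake=100.0,
                             max_stake=1_000_000.0, seed=42):
    """Draw delegator balances from a Pareto(alpha) tail, clipped to [min_stake, max_stake]."""
    rng = random.Random(seed)  # seeded for reproducible initial wealth
    stakes = []
    for _ in range(n_users):
        u = rng.random()
        # Inverse CDF of a Pareto distribution with scale min_stake.
        stake = min_stake * (1.0 - u) ** (-1.0 / alpha)
        stakes.append(min(max(stake, min_stake), max_stake))
    return stakes
```

Lower `alpha` fattens the tail (a few very rich delegators); fixing the seed keeps runs comparable across parameter sweeps.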
- The simulation is synchronous and runs entirely on the local machine; provide a responsive LLM endpoint to keep rounds moving.
- Logs are rewritten after each round to avoid partial files; keep the process running until completion to capture the full history.
- The repository ignores the `results/` directory by default. Add generated artifacts to version control only if you need to share them.
MIT License