RogueLLMania is a roguelike built with Electron and rot.js, featuring LLM-powered narration as its core differentiator. The game uses a local AI model (Qwen3-1.7B) running via llama.cpp to generate atmospheric level introductions and artifact descriptions in real time.
- Install prerequisites
  - Node.js: visit https://nodejs.org/
- Clone and install

  ```sh
  git clone <repository-url>
  cd RogueLLMania
  npm install
  ```
- Run the game (development)

  ```sh
  npm run dev
  ```

  This runs with DevTools enabled.

  First run: the app automatically downloads the LLM model (Qwen3-1.7B, ~1.19GB) with progress tracking. This is a one-time setup.

  Alternatively, start without DevTools:

  ```sh
  npm start
  ```
- Build the app

  ```sh
  npm run build
  ```

  Open the generated installer (e.g., `.dmg` on macOS, `.exe` on Windows) from the `dist/` directory.
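The resumable model download mentioned in the setup steps typically works by checking how many bytes are already on disk and requesting only the remainder via an HTTP `Range` header. The sketch below illustrates that idea; the function names are invented for illustration and are not the project's actual `ModelDownloader.js` code:

```javascript
// Hypothetical sketch of download-resume logic (names invented, not the
// project's real code). Given the size of a partial file on disk, build
// the HTTP Range header that asks the server for only the remaining bytes.
function resumeRange(bytesOnDisk, totalBytes) {
  if (bytesOnDisk <= 0) return null;            // nothing yet: make a full request
  if (bytesOnDisk >= totalBytes) return "done"; // already complete
  return `bytes=${bytesOnDisk}-${totalBytes - 1}`;
}

// Progress reporting is then just bytes received so far over the total.
function progressPercent(bytesOnDisk, totalBytes) {
  return Math.min(100, Math.round((bytesOnDisk / totalBytes) * 100));
}

console.log(resumeRange(512, 2048));     // "bytes=512-2047"
console.log(progressPercent(512, 2048)); // 25
```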
- Movement: WASD or Arrow keys
- Diagonals: Q (up-left), E (up-right), Z (down-left), C (down-right) — keypad 7/9/1/3 also work
- Wait/Pass turn: Space
- Inventory: I (toggle)
- Pick up item: G
- Save: Cmd/Ctrl+S
- Load: Cmd/Ctrl+L
- Restart (only when Game Over): R
- Close overlays: Esc
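The movement bindings above can be expressed as a simple key-to-direction table. This is a hypothetical sketch; the actual input handling lives in `src/systems/` and may be structured differently:

```javascript
// Hypothetical key-to-direction map matching the controls listed above.
// dx/dy follow screen coordinates: y grows downward.
const MOVE_KEYS = {
  w: { dx: 0, dy: -1 },  ArrowUp:    { dx: 0, dy: -1 },
  s: { dx: 0, dy: 1 },   ArrowDown:  { dx: 0, dy: 1 },
  a: { dx: -1, dy: 0 },  ArrowLeft:  { dx: -1, dy: 0 },
  d: { dx: 1, dy: 0 },   ArrowRight: { dx: 1, dy: 0 },
  q: { dx: -1, dy: -1 }, e: { dx: 1, dy: -1 }, // up-left / up-right
  z: { dx: -1, dy: 1 },  c: { dx: 1, dy: 1 },  // down-left / down-right
};

// Keypad 7/9/1/3 alias the diagonals.
MOVE_KEYS["7"] = MOVE_KEYS.q;
MOVE_KEYS["9"] = MOVE_KEYS.e;
MOVE_KEYS["1"] = MOVE_KEYS.z;
MOVE_KEYS["3"] = MOVE_KEYS.c;

function directionFor(key) {
  return MOVE_KEYS[key] ?? null; // null for non-movement keys (Space, I, G, ...)
}

console.log(directionFor("q")); // { dx: -1, dy: -1 }
```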
RogueLLMania includes a comprehensive testing framework for validating and improving LLM-generated narration quality:
```sh
# Run all tests
npm test

# Run benchmarks to track quality over time
npm run benchmark
```

- `public/` — HTML, CSS shell for the renderer
- `src/` — Main source code
  - `ai/brains/` — Monster AI behavior (zombie, chaser)
  - `combat/` — Combat mechanics, stats, factions
  - `content/` — Static content (artifacts, monsters, system messages)
  - `entities/` — Game entities (player, monsters, items, story objects)
  - `levels/` — Level generation (basic, cave, pillared hall), pathfinding
  - `main/llm/` — LLM backend (llama.cpp integration)
    - `ConfigManager.js` — Model config and system prompts
    - `LlamaBridge.js` — High-level LLM API
    - `LlamaManager.js` — Low-level llama.cpp wrapper
    - `ModelDownloader.js` — Model download with resume support
    - `schemas.js` — JSON schemas for structured output
  - `systems/` — Core game systems (renderer, input, FOV, turn engine, world, etc.)
  - `tiles/` — Tile types and definitions
  - `ui/` — UI layer and overlays
    - `overlays/` — Inventory, settings, FTUE, level intros
    - `startScreen.js` — Start screen with model download UI
    - `modelDownloadController.js` — Download state management
    - `overlayManager.js` — Overlay system
  - `game.js` — Game bootstrap and orchestration
  - `llm.js` — Renderer-side LLM client (IPC wrapper)
  - `main.js` — Electron main process
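The structured-output approach of `schemas.js` can be illustrated with a hypothetical schema for an artifact description. The field names below are invented for illustration (see the actual file for the real shapes); the point is that constraining the LLM to a JSON schema keeps generated narration machine-parseable:

```javascript
// Hypothetical JSON schema in the spirit of schemas.js (field names invented).
// A schema like this can drive llama.cpp's grammar-constrained sampling so
// the model must emit valid JSON instead of free-form prose.
const artifactSchema = {
  type: "object",
  properties: {
    name:        { type: "string" },
    description: { type: "string" },
    tone:        { type: "string", enum: ["ominous", "wondrous", "melancholy"] },
  },
  required: ["name", "description"],
};

// Minimal structural check for the sketch; a real project would likely use
// a full validator such as Ajv.
function matchesSchema(obj, schema) {
  if (typeof obj !== "object" || obj === null) return false;
  return schema.required.every((key) => key in obj);
}

const sample = { name: "Ashen Crown", description: "A circlet of cold cinders." };
console.log(matchesSchema(sample, artifactSchema)); // true
```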
- macOS: Packaged build (.dmg) with code signing. Validated for v0.1.2.
- Windows: Packaged build (.exe). Validated for v0.1.2.
- Linux: Runs via `npm start`/Electron; packaged build (.AppImage) not yet tested.
- Model: Qwen3-1.7B-Instruct @ Q4_K_M (~1.19GB download)
- Why this model? Small size, fast inference on consumer hardware, good quality for narrative generation
- Automatic management: Model downloads on first run with progress tracking and resume support
- Fully local: Runs via llama.cpp native bindings (no internet required after download, no external APIs)
- Storage: Model stored in user data directory, validated with SHA256 checksums
- Death state is a soft pause: on death the engine is locked and a Game Over message is shown, but any subsequent input queues an action and unlocks the engine. In practice you can still move or load a game until you press R to restart; this is intended for now, and a stricter hard-lock Game Over flow is planned.
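The soft-pause behavior can be pictured with a tiny state sketch (hypothetical; the real turn engine in `src/systems/` is more involved). The lock is advisory, so queueing any action clears it:

```javascript
// Hypothetical sketch of the soft-pause Game Over behavior described above.
class TurnEngine {
  constructor() {
    this.locked = false;
    this.queue = [];
  }
  gameOver() {
    this.locked = true; // show Game Over message, stop processing turns
  }
  handleInput(action) {
    // Soft pause: any input queues an action AND unlocks the engine,
    // so the player can still act until they press R to restart.
    this.queue.push(action);
    this.locked = false;
  }
}

const engine = new TurnEngine();
engine.gameOver();
console.log(engine.locked); // true
engine.handleInput("move-left");
console.log(engine.locked); // false — this is the soft-pause quirk
```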
MIT