Autonomous agents for code security auditing
Overview • Configuration • Workflow • Chatbot • Contributing
Hound is a language-agnostic AI auditor that autonomously builds and refines adaptive knowledge graphs for deep, iterative code reasoning.
- Graph-driven analysis – Flexible, agent-designed graphs that can model any aspect of a system (e.g. architecture, access control, value flows, math, etc.)
- Relational graph views – High-level graphs support cross-aspect reasoning and precise retrieval of the code snippets that back each subsystem investigated.
- Belief & hypothesis system – Observations, assumptions, and hypotheses evolve with confidence scores, enabling long-horizon reasoning and cumulative audits.
- Dynamic model switching – Lightweight "scout" models handle exploration; heavyweight "strategist" models provide deep reasoning, mirroring expert workflows while keeping costs efficient.
- Strategic audit planning – Balances broad code coverage with focused investigation of the most promising aspects, ensuring both depth and efficiency.
Codebase size considerations: While Hound can analyze any codebase, it's optimized for small-to-medium-sized projects like typical smart contract applications. Large enterprise codebases may exceed context limits and require selective analysis of specific subsystems.
pip install -r requirements.txt
Set up your API keys, e.g.:
export OPENAI_API_KEY=your_key_here
Copy the example configuration and edit as needed:
cp hound/config.yaml.example hound/config.yaml
# then edit hound/config.yaml to select providers/models and options
Notes:
- Defaults work out-of-the-box; you can override many options via CLI flags.
- Keep API keys out of the repo; API_KEYS.txt is gitignored and can be sourced.
Note: Audit quality scales with time and model capability. Use longer runs and advanced models for more complete results.
Projects organize your audits and store all analysis data:
# Create a project from local code
./hound.py project create myaudit /path/to/code
# List all projects
./hound.py project ls
# View project details and coverage
./hound.py project info myaudit
Hound analyzes your codebase and builds aspect‑oriented knowledge graphs that serve as the foundation for all subsequent analysis.
Recommended (one‑liner):
# Auto-generate a default set of graphs (up to 5) and refine
# Strongly recommended: pass a whitelist of files (comma-separated)
./hound.py graph build myaudit --auto \
--files "src/A.sol,src/B.sol,src/utils/Lib.sol"
# View generated graphs
./hound.py graph ls myaudit
Alternative (manual guidance):
# 1) Initialize the baseline SystemArchitecture graph
./hound.py graph build myaudit --init \
--files "src/A.sol,src/B.sol,src/utils/Lib.sol"
# 2) Add a specific graph with your own description (exactly one graph)
./hound.py graph custom myaudit \
"Call graph focusing on function call relationships across modules" \
--iterations 2 \
--files "src/A.sol,src/B.sol,src/utils/Lib.sol"
# (Repeat 'graph custom' for additional targeted graphs as needed)
Operational notes:
- --auto always includes the SystemArchitecture graph as the first graph. You do not need to run --init in addition to --auto.
- If --init is used and a SystemArchitecture graph already exists, initialization is skipped. Use --auto to add more graphs, or remove existing graphs first if you want a clean re‑init.
- When running --auto and graphs already exist, Hound asks for confirmation before updating/overwriting graphs (including SystemArchitecture). To clear graphs:
./hound.py graph rm myaudit --all                      # remove all graphs
./hound.py graph rm myaudit --name SystemArchitecture  # remove one graph
- For large repos, you can constrain scope with --files (comma‑separated whitelist) alongside either approach.
Whitelists (strongly recommended):
- Always pass a whitelist of input files via --files. For the best results, the selected files should fit in the model’s available context window; whitelisting keeps the graph builder focused and avoids token overflows.
- If you do not pass --files, Hound will consider all files in the repository. On large codebases this triggers sampling and may degrade coverage/quality.
- --files expects a comma‑separated list of paths relative to the repo root.
Examples:
# Manual (small projects)
./hound.py graph build myaudit --auto \
--files "src/A.sol,src/B.sol,src/utils/Lib.sol"
# Generate a whitelist automatically (recommended for larger projects)
python whitelist_builder.py \
--input /path/to/repo \
--limit-loc 20000 \
--output whitelists/myaudit
# Use the generated list (newline-separated) as a comma list for --files
./hound.py graph build myaudit --auto \
--files "$(tr '\n' ',' < whitelists/myaudit | sed 's/,$//')"- Refine existing graphs (resume building):
You can resume/refine an existing graph without creating new ones using graph refine. This skips discovery and saves updates incrementally.
# Refine a single graph by name (internal or display)
./hound.py graph refine myaudit SystemArchitecture \
--iterations 2 \
--files "src/A.sol,src/B.sol,src/utils/Lib.sol"
# Refine all existing graphs
./hound.py graph refine myaudit --all --iterations 2 \
--files "src/A.sol,src/B.sol,src/utils/Lib.sol"Notes on refinement:
- Argument order is
graph refine <project> [NAME]. Example:./hound.py graph refine fider AuthorizationMap. If you put the name first, it will be treated as the project. - Refinement uses the stored whitelist from the initial ingestion by default. Passing a new
--fileslist will rebuild ingestion for that run with the new whitelist. - Refinement prioritizes connecting and improving existing structure. It minimizes new node creation and, when refining a single graph, only accepts new nodes that immediately connect to existing nodes (kept to a small number). For broader expansion, prefer
graph build --auto.
What happens: Hound inspects the codebase and creates specialized graphs for different aspects (e.g., access control, value flows, state management). Each graph contains:
- Nodes: Key concepts, functions, and state variables
- Edges: Relationships between components
- Annotations: Observations and assumptions tied to specific code locations
- Code cards: Extracted code snippets linked to graph elements
These graphs enable Hound to reason about high-level patterns while maintaining precise code grounding.
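To make that structure concrete, the sketch below shows one way to picture a graph in memory. It is purely illustrative: the pieces (nodes, edges, annotations, code cards) follow the list above, but the field names and values are assumptions, not Hound's actual on-disk schema.

```python
# Illustrative mental model of a knowledge graph; not Hound's actual schema.
graph = {
    "name": "SystemArchitecture",
    "nodes": [
        {"id": "Vault.withdraw", "type": "function", "cards": ["card_12"]},
        {"id": "Vault.balances", "type": "state_variable", "cards": ["card_03"]},
    ],
    "edges": [
        {"src": "Vault.withdraw", "dst": "Vault.balances", "label": "writes"},
    ],
    "annotations": [
        {"node": "Vault.withdraw", "kind": "assumption",
         "text": "Caller balance is updated before the external call"},
    ],
    # Code cards: extracted snippets that ground nodes and edges in real source
    "cards": {
        "card_12": {"file": "src/Vault.sol", "lines": [88, 120]},
        "card_03": {"file": "src/Vault.sol", "lines": [15, 18]},
    },
}
```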
The audit phase uses the senior/junior pattern with planning and investigation:
# Run a full audit with strategic planning (new session)
./hound.py agent audit myaudit
# Set time limit (in minutes)
./hound.py agent audit myaudit --time-limit 30
# Start with telemetry (connect the Chatbot UI to steer)
./hound.py agent audit myaudit --telemetry --time-limit 30
# Enable debug logging (captures all prompts/responses)
./hound.py agent audit myaudit --debug
# Attach to an existing session and continue where you left off
./hound.py agent audit myaudit --session <session_id>
Tip: When started with --telemetry, you can connect the Chatbot UI and steer the audit interactively (see the Chatbot section below).
Key parameters:
- --time-limit: Stop after N minutes (useful for incremental audits)
- --plan-n: Number of investigations per planning batch
- --session: Resume a specific session (continues coverage/planning)
- --debug: Save all LLM interactions to .hound_debug/
Audit duration and depth: Hound is designed to deliver increasingly complete results with longer audits. The analyze step can range from:
- Quick scan: 1 hour with fast models (gpt-4o-mini) for initial findings
- Standard audit: 4-8 hours with balanced models for comprehensive coverage
- Deep audit: Multiple days with advanced models (GPT-5) for exhaustive analysis
The quality and duration depend heavily on the models used. Faster models provide quick results but may miss subtle issues, while advanced reasoning models find deeper vulnerabilities but require more time.
What happens during an audit:
The audit is a dynamic, iterative process with continuous interaction between Strategist and Scout:
1. Initial Planning (Strategist)
   - Reviews all knowledge graphs and annotations
   - Identifies contradictions between assumptions and observations
   - Creates a batch of prioritized investigations (default: 5)
   - Focus areas: access control violations, value transfer risks, state inconsistencies
2. Investigation Loop (Scout + Strategist collaboration)
   For each investigation in the batch:
   - Scout explores: loads relevant graph nodes, analyzes code
   - Scout escalates: when deep analysis is needed, calls the Strategist via deep_think
   - Strategist analyzes: reviews the Scout's collected context, forms vulnerability hypotheses
   - Hypotheses form: findings are added to the global store
   - Coverage updates: tracks visited nodes and analyzed code
3. Adaptive Replanning
   After completing a batch:
   - The Strategist reviews new findings and updated coverage
   - Reorganizes priorities based on discoveries
   - If a vulnerability is found, searches for related issues
   - Plans the next batch of investigations
   - Continues until coverage goals are met or no promising leads remain
4. Session Management
   - A unique session ID tracks the entire audit lifecycle
   - Coverage metrics show exploration progress
   - All findings accumulate in the hypothesis store
   - Token usage is tracked per model and investigation
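Conceptually, the whole audit reduces to a plan → investigate → replan loop between the two roles. The sketch below paraphrases the behaviour described above; it is not Hound's implementation, and names such as plan_batch, explore and needs_deep_analysis are hypothetical.

```python
# Conceptual sketch of the Strategist/Scout loop; method names are hypothetical.
def run_audit(strategist, scout, graphs, hypotheses, coverage, plan_n=5, budget=None):
    while budget is None or not budget.exhausted():
        # Strategist reviews graphs, annotations, coverage and prior findings,
        # then plans a prioritized batch of investigations (default: 5).
        batch = strategist.plan_batch(graphs, hypotheses, coverage, n=plan_n)
        if not batch:
            break  # no promising leads remain
        for investigation in batch:
            context = scout.explore(investigation, graphs)   # load nodes, read code
            if scout.needs_deep_analysis(context):
                # Escalation: Strategist reviews the collected context (deep_think)
                hypotheses.extend(strategist.deep_think(context))
            coverage.update(context)                         # visited nodes / cards
        # Loop continues: replanning after each batch, informed by new findings.
```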
Example output:
Planning Next Investigations...
1. [P10] Investigate role management bypass vulnerabilities
2. [P9] Check for reentrancy in value transfer functions
3. [P8] Analyze emergency function privilege escalation
Coverage Statistics:
Nodes visited: 23/45 (51.1%)
Cards analyzed: 12/30 (40.0%)
Hypotheses Status:
Total: 15
High confidence: 8
Confirmed: 3
Check audit progress and findings at any time during the audit. If you started the agent with --telemetry, you can also monitor and steer via the Chatbot UI:
- Open http://127.0.0.1:5280 and attach to the running instance
- Watch live Activity, Plan, and Findings
- Use the Steer form to guide the next investigations
# View current hypotheses (findings)
./hound.py project ls-hypotheses myaudit
# See detailed hypothesis information
./hound.py project hypotheses myaudit --details
# List hypotheses with confidence ratings
./hound.py project hypotheses myaudit
# Check coverage statistics
./hound.py project coverage myaudit
# View session details
./hound.py project sessions myaudit --list
Understanding hypotheses: Each hypothesis represents a potential vulnerability with:
- Confidence score: 0.0-1.0 indicating likelihood of being a real issue
- Status: proposed (initial), investigating, confirmed, rejected
- Severity: critical, high, medium, low
- Type: reentrancy, access control, logic error, etc.
- Annotations: Exact code locations and evidence
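For orientation, a single hypothesis can be pictured as a record along these lines. This is an illustrative sketch built from the fields listed above; the exact storage format may differ.

```python
# Illustrative hypothesis record; field names follow the list above, values are made up.
hypothesis = {
    "id": "hyp_12345",
    "title": "Reentrancy in Vault.withdraw",
    "type": "reentrancy",
    "severity": "high",
    "status": "proposed",      # proposed -> investigating -> confirmed / rejected
    "confidence": 0.72,        # 0.0-1.0 likelihood of being a real issue
    "annotations": [
        {"file": "src/Vault.sol", "lines": [95, 110],
         "evidence": "external call happens before the balance update"},
    ],
}
```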
For specific concerns, run focused investigations without full planning:
# Investigate a specific concern
./hound.py agent investigate "Check for reentrancy in withdraw function" myaudit
# Quick investigation with fewer iterations
./hound.py agent investigate "Analyze access control in admin functions" myaudit \
--iterations 5
# Use specific models for investigation
./hound.py agent investigate "Review emergency functions" myaudit \
--model gpt-4o \
--strategist-model gpt-5
When to use targeted investigations:
- Following up on specific concerns after initial audit
- Testing a hypothesis about a particular vulnerability
- Quick checks before full audit
- Investigating areas not covered by automatic planning
Note: These investigations still update the hypothesis store and coverage tracking.
A reasoning model reviews all hypotheses and updates their status based on evidence:
# Run finalization with quality review
./hound.py finalize myaudit
# Re-run all pending (including below threshold)
./hound.py finalize myaudit --include-below-threshold
# Customize confidence threshold
./hound.py finalize myaudit -t 0.7 --model gpt-4o
# Include all findings (not just confirmed)
# (Use on the report command, not finalize)
./hound.py report myaudit --include-all
What happens during finalization:
- A reasoning model (default: GPT-5) reviews each hypothesis
- Evaluates the evidence and code context
- Updates status to confirmed or rejected based on analysis
- Adjusts confidence scores based on evidence strength
- Prepares findings for report generation
Important: By default, only confirmed findings appear in the final report. Use --include-all to include all hypotheses regardless of status.
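As a rough picture of how the threshold and --include-below-threshold interact with the review, the sketch below shows the kind of filtering you can expect. The helper names are hypothetical; the actual confirm/reject decision is made by the reasoning model, not by this logic.

```python
# Conceptual sketch of finalization; helper names are hypothetical.
def finalize(hypotheses, review_with_llm, threshold=0.5, include_below_threshold=False):
    for hyp in hypotheses:
        if hyp["status"] in ("confirmed", "rejected"):
            continue  # already decided
        if hyp["confidence"] < threshold and not include_below_threshold:
            continue  # skipped unless --include-below-threshold is passed
        verdict = review_with_llm(hyp)          # evaluates evidence and code context
        hyp["status"] = verdict.status          # "confirmed" or "rejected"
        hyp["confidence"] = verdict.confidence  # adjusted by evidence strength
```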
Create and manage proof-of-concept exploits for confirmed vulnerabilities:
# Generate PoC prompts for confirmed vulnerabilities
./hound.py poc make-prompt myaudit
# Generate for a specific hypothesis
./hound.py poc make-prompt myaudit --hypothesis hyp_12345
# Import existing PoC files
./hound.py poc import myaudit hyp_12345 exploit.sol test.js \
--description "Demonstrates reentrancy exploit"
# List all imported PoCs
./hound.py poc list myaudit
The PoC workflow:
1. make-prompt: Generates detailed prompts for coding agents (like Claude Code)
   - Includes vulnerable file paths (project-relative)
   - Specifies exact functions to target
   - Provides clear exploit requirements
   - Saves prompts to the poc_prompts/ directory
2. import: Links PoC files to specific vulnerabilities
   - Files stored in poc/[hypothesis-id]/
   - Metadata tracks descriptions and timestamps
   - Multiple files per vulnerability supported
3. Automatic inclusion: Imported PoCs appear in reports with syntax highlighting
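Because imported PoCs are just files on disk under poc/[hypothesis-id]/, a quick directory walk is enough to see what has been linked so far. The project path below is an assumption based on the ~/.hound/projects/ layout mentioned in the session section; adjust it to where your project data actually lives.

```python
# Illustrative: list imported PoC files per hypothesis.
# Assumes project data lives under ~/.hound/projects/<name>/ (an assumption).
from pathlib import Path

poc_root = Path.home() / ".hound" / "projects" / "myaudit" / "poc"
if poc_root.exists():
    for hyp_dir in sorted(p for p in poc_root.iterdir() if p.is_dir()):
        files = [f.name for f in hyp_dir.iterdir() if f.is_file()]
        print(f"{hyp_dir.name}: {', '.join(files) or '(no files)'}")
else:
    print("no PoCs imported yet")
```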
Produce comprehensive audit reports with all findings and PoCs:
# Generate HTML report (includes imported PoCs)
./hound.py report myaudit
# Include all hypotheses, not just confirmed
./hound.py report myaudit --include-all
# Export report to specific location
./hound.py report myaudit --output /path/to/report.html
Report contents:
- Executive summary: High-level overview and risk assessment
- System architecture: Understanding of the codebase structure
- Findings: Detailed vulnerability descriptions (only confirmed by default)
- Code snippets: Relevant vulnerable code with line numbers
- Proof-of-concepts: Any imported PoCs with syntax highlighting
- Severity distribution: Visual breakdown of finding severities
- Recommendations: Suggested fixes and improvements
Note: The report uses a professional dark theme and includes all imported PoCs automatically.
Each audit run operates under a session with comprehensive tracking and per-session planning:
- Planning is stored in a per-session PlanStore with statuses: planned, in_progress, done, dropped, superseded.
- Existing planned items are executed first; the Strategist only tops up new items to reach your --plan-n.
- On resume, any stale in_progress items are reset to planned; completed items remain done and are not duplicated.
- Completed investigations, coverage, and hypotheses are fed back into planning to avoid repeats and guide prioritization.
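The resume behaviour boils down to simple bookkeeping over plan-item statuses. The sketch below paraphrases it; the record shape is assumed, not the real PlanStore format.

```python
# Conceptual sketch of what resuming a session does to plan items (record shape assumed).
def reset_stale_items(plan_items):
    for item in plan_items:
        if item["status"] == "in_progress":  # interrupted mid-investigation
            item["status"] = "planned"       # will be picked up again
        # "done", "dropped" and "superseded" items are left untouched,
        # so completed work is never duplicated on resume.
    return plan_items
```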
# View session details
./hound.py project sessions myaudit <session_id>
# List and inspect sessions
./hound.py project sessions myaudit --list
./hound.py project sessions myaudit <session_id>
# Show planned investigations for a session (Strategist PlanStore)
./hound.py project plan myaudit <session_id>
# Session data includes:
# - Coverage statistics (nodes/cards visited)
# - Investigation history
# - Token usage by model
# - Planning decisions
# - Hypothesis formation
Sessions are stored in ~/.hound/projects/myaudit/sessions/ and contain:
- session_id: Unique identifier
- coverage: Visited nodes and analyzed code
- investigations: All executed investigations
- planning_history: Strategic decisions made
- token_usage: Detailed API usage metrics
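Because sessions are plain files, you can also script against them. The sketch below assumes one JSON file per session named after the session id and the field names listed above; both are assumptions, so adapt the path and keys to what you actually find in the sessions/ directory.

```python
# Illustrative: summarize a stored session (path and keys are assumptions).
import json
from pathlib import Path

session_file = Path.home() / ".hound" / "projects" / "myaudit" / "sessions" / "<session_id>.json"
data = json.loads(session_file.read_text())

print("session:", data.get("session_id"))
print("investigations:", len(data.get("investigations", [])))
print("coverage:", data.get("coverage"))
for model, usage in data.get("token_usage", {}).items():
    print(f"  {model}: {usage}")
```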
Resume/attach to an existing session during an audit run by passing the session ID:
# Attach to a specific session and continue auditing under it
./hound.py agent audit myaudit --session <session_id>
When you attach to a session, its status is set to active while the audit runs and finalized on completion (completed, or interrupted if a time limit was hit). Any in_progress plan items are reset to planned so you can continue cleanly.
# Start an audit (creates a session automatically)
./hound.py agent audit myaudit
# List sessions to get the session id
./hound.py project sessions myaudit --list
# Show planned investigations for that session
./hound.py project plan myaudit <session_id>
# Attach later and continue planning/execution under the same session
./hound.py agent audit myaudit --session <session_id>
Hound ships with a lightweight web UI for steering and monitoring a running audit session. It discovers local runs via a simple telemetry registry and streams status/decisions live.
Prerequisites:
- Set API keys (at least OPENAI_API_KEY): source ../API_KEYS.txt or export manually
- Install Python deps in this submodule: pip install -r requirements.txt
- Start the agent with telemetry enabled
# From the hound/ directory
./hound.py agent audit myaudit --telemetry --debug
# Notes
# - The --telemetry flag exposes a local SSE/control endpoint and registers the run
# - Optional: ensure the registry dir matches the chatbot by setting:
# export HOUND_REGISTRY_DIR="$HOME/.local/state/hound/instances"
- Launch the chatbot server
# From the hound/ directory
python chatbot/run.py
# Optional: customize host/port
HOST=0.0.0.0 PORT=5280 python chatbot/run.py
Open the UI: http://127.0.0.1:5280
- Select the running instance and stream activity
- The input next to “Start” lists detected instances as project_path | instance_id.
- Click “Start” to attach; the UI auto‑connects the realtime channel and begins streaming decisions/results.
- The lower panel has tabs:
- Activity: live status/decisions
- Plan: current strategist plan (✓ done, ▶ active, • pending)
- Findings: hypotheses with confidence; you can Confirm/Reject manually
- Steer the audit
- Use the “Steer” form (e.g., “Investigate reentrancy across the whole app next”).
- Steering is queued at <project>/.hound/steering.jsonl and consumed exactly once when applied.
- Broad, global instructions may preempt the current investigation and trigger immediate replanning.
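Normally you steer through the UI form, but since the queue is just a JSON-lines file it can in principle be appended to from a script. The entry fields below are an assumption (the schema isn't documented here), so treat this as a sketch of the mechanism rather than a supported interface.

```python
# Illustrative only: append a steering instruction to the per-project queue.
# The entry schema is an assumption; prefer the Chatbot UI's Steer form.
import json
import time
from pathlib import Path

steering_path = Path("/path/to/project") / ".hound" / "steering.jsonl"
entry = {
    "instruction": "Investigate reentrancy across the whole app next",
    "timestamp": time.time(),
}
with steering_path.open("a") as fh:
    fh.write(json.dumps(entry) + "\n")   # one JSON object per line, consumed once
```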
Troubleshooting
- No instances in dropdown: ensure you started the agent with --telemetry.
- Wrong or stale project shown: clear the input; the UI defaults to the most recent alive instance.
- Registry mismatch: confirm both processes print the same “Using registry dir:” line, or set HOUND_REGISTRY_DIR for both.
- Raw API: open /api/instances in the browser to inspect entries (includes an alive flag and the registry path).
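The same /api/instances endpoint is handy from a script when chasing registry issues. The sketch assumes the chatbot server is running on its default host/port (127.0.0.1:5280) and returns JSON; field contents beyond the documented alive flag are not assumed here.

```python
# Dump the chatbot's registered instances (assumes default host/port and JSON output).
import json
from urllib.request import urlopen

with urlopen("http://127.0.0.1:5280/api/instances") as resp:
    print(json.dumps(json.loads(resp.read()), indent=2))
```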
Hypotheses are the core findings that accumulate across sessions:
# List hypotheses with confidence scores
./hound.py project hypotheses myaudit
# View with full details
./hound.py project hypotheses myaudit --details
# Update hypothesis status
./hound.py project set-hypothesis-status myaudit hyp_12345 confirmed
# Reset hypotheses (creates backup)
./hound.py project reset-hypotheses myaudit
# Force reset without confirmation
./hound.py project reset-hypotheses myaudit --force
Hypothesis statuses:
- proposed: Initial finding, needs review
- investigating: Under active investigation
- confirmed: Verified vulnerability
- rejected: False positive
- resolved: Fixed in code
Override default models per component:
# Use different models for each role
./hound.py agent audit myaudit \
--platform openai --model gpt-4o-mini \ # Scout
--strategist-platform anthropic --strategist-model claude-3-opus   # Strategist
Capture all LLM interactions for analysis:
# Enable debug logging
./hound.py agent audit myaudit --debug
# Debug logs saved to .hound_debug/
# Includes HTML reports with all prompts and responses
Monitor audit progress and completeness:
# View coverage statistics
./hound.py project coverage myaudit
# Coverage shows:
# - Graph nodes visited vs total
# - Code cards analyzed vs total
# - Percentage completion
See CONTRIBUTING.md for development setup and guidelines.
Apache 2.0 with additional terms:
You may use Hound however you want, except selling it as an online service or as an appliance - that requires written permission from the author.
- See LICENSE for details.
