Git for Prompts: version control that actually understands your LLM prompts
PIT is a semantic version control system designed specifically for managing LLM prompts. Unlike traditional Git workflows, PIT understands the meaning of your prompts, tracking not just what changed but why it matters for your AI's behavior.
Stop treating prompts like plain text files. Start versioning them like the critical assets they are.
**Core Versioning**

| Feature | Description |
|---|---|
| Semantic Versioning | Track prompt changes with meaningful version numbers |
| Automatic Variable Detection | Extracts Jinja2 template variables ({{variable}}) on commit |
| Rich Diff Visualization | Compare versions with syntax highlighting |
| Tagging System | Mark important versions (production, stable, experimental) |
| Instant Checkout | Switch between prompt versions instantly |
| Query Language | Search: success_rate >= 0.9, content contains 'be concise' |
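Automatic variable detection, as listed above, pulls `{{variable}}` placeholders out of Jinja2-style templates at commit time. A minimal sketch of the idea (a regex simplification for illustration, not PIT's actual implementation, which would need full Jinja2 parsing):

```python
import re

# Matches {{ variable }} with optional surrounding whitespace.
# A simplification of real Jinja2 parsing, for illustration only.
VAR_PATTERN = re.compile(r"\{\{\s*(\w+)\s*\}\}")

def extract_variables(prompt: str) -> list[str]:
    """Return unique template variables in first-seen order."""
    seen: dict[str, None] = {}
    for name in VAR_PATTERN.findall(prompt):
        seen.setdefault(name)
    return list(seen)

print(extract_variables("Hello {{ name }}, your ticket {{ticket_id}} ({{ name }})"))
# ['name', 'ticket_id']
```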
**Collaboration & Sharing**

| Feature | Description |
|---|---|
| Shareable Patches | Export/import prompt changes as .promptpatch files |
| Prompt Bundles | Package and share prompts with dependencies |
| Time-Travel Replay | Test same input across all versions |
| Git-Style Hooks | Validation and automation (pre-commit, post-checkout) |
| External Dependencies | Depend on prompts from GitHub, local paths, or URLs |
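Time-travel replay boils down to rendering the same input against every stored version and collecting the outputs side by side. A hypothetical sketch, where `render` stands in for whatever model call or template engine you actually use:

```python
def render(template: str, **values) -> str:
    # Stand-in for a real model call; here we just substitute variables.
    out = template
    for key, val in values.items():
        out = out.replace("{{" + key + "}}", str(val))
    return out

def replay(versions: dict[int, str], **values) -> dict[int, str]:
    """Run the same input across all versions, keyed by version number."""
    return {v: render(tpl, **values) for v, tpl in sorted(versions.items())}

versions = {1: "Hi {{name}}.", 2: "Hello {{name}}, how can I help?"}
print(replay(versions, name="Ada"))
# {1: 'Hi Ada.', 2: 'Hello Ada, how can I help?'}
```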
**Testing & Analytics**

| Feature | Description |
|---|---|
| A/B Testing | Statistically significant comparisons with scipy-powered t-tests |
| Performance Tracking | Monitor tokens, latency, success rates, costs per version |
| Regression Testing | Automated test suites to catch prompt degradations |
| Analytics Dashboard | Rich terminal charts and HTML reports |
| Binary Search (Bisect) | Find which version broke behavior |
| Worktrees | Multiple prompt contexts without switching |
| Stash | Save WIP with full context |
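The A/B testing row above mentions scipy-powered t-tests; the core statistic is Welch's two-sample t, which tolerates unequal variances between variants. A stdlib-only sketch of the statistic (PIT itself would use `scipy.stats.ttest_ind`, which also returns a p-value):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic: robust to unequal variances and sample sizes."""
    va, vb = variance(a), variance(b)  # sample variances (n-1 denominator)
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Success rates per trial for two prompt variants (made-up numbers)
variant_a = [0.90, 0.92, 0.88, 0.91]
variant_b = [0.80, 0.84, 0.79, 0.83]
print(round(welch_t(variant_a, variant_b), 2))
# 5.97
```

A large |t| for the sample sizes involved is what lets `pit ab-test` call a comparison statistically significant rather than noise.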
**Security & AI**

| Feature | Description |
|---|---|
| Security Scanner | OWASP LLM Top 10 compliance checking |
| Prompt Injection Detection | Catch malicious input patterns |
| PII/API Key Detection | Prevent data leakage |
| Auto-Optimizer | AI-powered prompt improvement suggestions |
| Semantic Merge | Categorize changes and detect conflicts |
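Secret and PII detection of the kind listed above is typically pattern-based. A minimal sketch with two illustrative rules (real scanners ship far larger rule sets plus entropy heuristics; these patterns are examples, not PIT's actual rules):

```python
import re

# Illustrative patterns only; production scanners use much larger rule sets.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs for every hit."""
    return [(name, m.group()) for name, pat in PATTERNS.items()
            for m in pat.finditer(text)]

findings = scan("Contact ops@example.com, key AKIAABCDEFGHIJKLMNOP")
print(findings)
# [('email', 'ops@example.com'), ('aws_access_key', 'AKIAABCDEFGHIJKLMNOP')]
```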
PIT now includes a beautiful web-based dashboard for visualizing prompt data:
```bash
# Launch the Streamlit dashboard
streamlit run pit-dashboard.py
```

Features:
- **Visual Timeline**: Track version metrics over time
- **Side-by-Side Diff**: Compare prompt versions
- **Metrics Dashboard**: Success rate and latency charts
- **Interactive Replay**: Test inputs across versions
- **A/B Test Results**: View experiment results
```bash
pip install prompt-pit
```

Or with optional LLM provider support:

```bash
# With Anthropic Claude support
pip install prompt-pit[anthropic]

# With OpenAI support
pip install prompt-pit[openai]

# With everything
pip install prompt-pit[all]
```

```bash
# Create a new prompt repository
mkdir my-prompts
cd my-prompts
pit init

# Add a prompt file
pit add system-prompt.md --name "customer-support" \
    --description "AI assistant for customer support"

# Commit a new version
pit commit customer-support --message "Added empathy guidelines"

# View version history
pit log customer-support

# Compare versions
pit diff customer-support --v1 1 --v2 2

# Checkout a specific version
pit checkout customer-support --version 1

# Tag a version
pit tag customer-support --version 2 --tag production
```

```bash
pit init              # Initialize a new PIT project
pit add <file>        # Add a prompt to track
pit list              # List all tracked prompts
pit commit <prompt>   # Save a new version
pit log <prompt>      # View version history
pit diff <prompt>     # Compare versions
pit checkout <prompt> # Switch to a version
pit tag <prompt>      # Manage tags
```
```bash
# Patches
pit patch create <prompt> v1 v2 --output fix.patch
pit patch apply fix.patch --to <prompt>

# Hooks
pit hooks install pre-commit
pit hooks run pre-commit --prompt <prompt>

# Bundles
pit bundle create my-bundle --prompts "p1,p2" --with-history
pit bundle install my-bundle.bundle

# Replay
pit replay run <prompt> --input "Hello" --versions 1-5
pit replay compare <prompt> --input "Hello" --versions 1,3,5

# Dependencies
pit deps add shared github org/repo/prompts --version v1.0
pit deps install

# Worktrees
pit worktree add ./feature-wt <prompt>@v2

# Stash
pit stash save "WIP: improving tone"
pit stash pop 0

# Bisect
pit bisect start --prompt <prompt> --failing-input "bad query"
pit bisect good v1
pit bisect bad v5

# Testing
pit test create-suite --name "support-tests"
pit test add-case support-tests --name "greeting"
pit test run <prompt> --suite support-tests

# A/B Testing
pit ab-test <prompt> --variant-a 2 --variant-b 3 --sample-size 100

# Security
pit scan <prompt>
pit validate <prompt> --fail-on high

# Optimization
pit optimize analyze <prompt>
pit optimize improve <prompt> --strategy detailed

# Analytics
pit stats show <prompt>
pit stats report <prompt> --output report.html
```

```
my-prompts/
├── .pit/                    # PIT database and config
│   ├── config.yaml          # Project configuration
│   └── pit.db               # SQLite database
├── prompts/                 # Your prompt files
│   └── customer-support.md
└── .pit.yaml                # Optional: global config
```
Create `.pit.yaml` in your project root:

```yaml
# LLM Provider Configuration
llm:
  provider: anthropic        # anthropic, openai, ollama
  api_key: ${ANTHROPIC_API_KEY}
  model: claude-3-sonnet-20240229

# Default settings
defaults:
  auto_commit: false
  require_tests: true

# Security policies
security:
  max_severity: medium       # fail on medium+ severity issues

# Performance thresholds
performance:
  max_latency_ms: 2000
  min_success_rate: 0.95
```

| Feature | Git | PIT |
|---|---|---|
| Line-by-line diff | ✅ | ✅ |
| Semantic understanding | ❌ | ✅ |
| Variable tracking | ❌ | ✅ |
| Performance metrics | ❌ | ✅ |
| A/B testing | ❌ | ✅ |
| Security scanning | ❌ | ✅ |
| Prompt optimization | ❌ | ✅ |
| Shareable patches | ✅ | ✅ |
| Git-style hooks | ✅ | ✅ |
| Query language | ❌ | ✅ |
| Time-travel replay | ❌ | ✅ |
| Bundle packaging | ❌ | ✅ |
| External dependencies | ✅ | ✅ |
| LLM framework integration | ❌ | ✅ |
```bash
# Run all tests
pytest

# With coverage
pytest --cov=pit

# Run specific test file
pytest tests/test_core/test_security.py -v
```

We welcome contributions! Please see our Contributing Guide for details.
PIT is released under the MIT License. See LICENSE for details.
- Built with Typer for CLI magic
- Powered by Rich for beautiful terminal output
- Inspired by the need for better prompt management in production LLM systems
- Issues: GitHub Issues
- Discussions: GitHub Discussions

Made with ❤️ for the LLM community

Where prompts go to evolve 🌱






