A comprehensive AI assistant for Emacs with support for multiple providers (originally developed for OpenAI and Claude.ai integration) and extensive features for code analysis, text editing, and project management.
- Multi-provider support: OpenAI, Claude, Gemini, Ollama, llama.cpp
- Streaming and non-streaming responses
- Code analysis: explain, fix, review (security, performance, style, bugs)
- Text editing: grammar, style, spelling fixes
- Project context awareness
- Session management with save/load
- Buffer-to-session association
- Integration with org-mode, magit, lsp-mode, projectile
To install:

- Download `ai-integration.el` to your Emacs configuration directory
- Add to your `.emacs` or `init.el`:

```elisp
(load-file "~/.emacs.d/ai-integration.el")
(require 'ai-integration)
```

Set your API keys as environment variables:

```bash
export OPENAI_API_KEY="your-openai-key-here"
export ANTHROPIC_API_KEY="your-claude-key-here"
export GEMINI_API_KEY="your-gemini-key-here"
```
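If you'd rather not export keys in your shell profile, they can be read from `~/.authinfo.gpg` with Emacs' built-in auth-source and fed into the `ai-*-api-key` variables shown in the configuration below — a minimal sketch; the `machine` host names here are my own convention, not something the package mandates:

```elisp
;; Sketch: pull API keys from ~/.authinfo.gpg instead of hard-coding them.
;; Expects entries such as:
;;   machine api.openai.com login apikey password sk-...
(require 'auth-source)
(setq ai-openai-api-key
      (auth-source-pick-first-password :host "api.openai.com"))
(setq ai-claude-api-key
      (auth-source-pick-first-password :host "api.anthropic.com"))
(setq ai-gemini-api-key
      (auth-source-pick-first-password :host "generativelanguage.googleapis.com"))
```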
```elisp
;; Basic configuration
(setq ai-default-provider "openai")  ; or "claude", "gemini", "ollama", "llama-cpp"
(setq ai-streaming-default t)        ; Enable streaming by default
(setq ai-auto-associate-buffers t)   ; Auto-associate buffers with sessions

;; Optional: Set API keys in Emacs (less secure than env vars)
(setq ai-openai-api-key "your-key-here")
(setq ai-claude-api-key "your-key-here")
(setq ai-gemini-api-key "your-key-here")

;; Optional: Customize models
(setq ai-openai-model "gpt-4o")
(setq ai-claude-model "claude-sonnet-4-20250514")
(setq ai-gemini-model "gemini-1.5-flash")
```
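Because these are ordinary variables, defaults can also be computed — for example, falling back to a local provider when no OpenAI key is present in the environment. A small sketch of my own, using only the variables documented above:

```elisp
;; Sketch: prefer OpenAI when a key is available, otherwise use Ollama.
(setq ai-default-provider
      (if (getenv "OPENAI_API_KEY") "openai" "ollama"))
```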
For Ollama:

```elisp
(setq ai-ollama-endpoint "http://localhost:11434/api/chat")
(setq ai-ollama-model "llama3.2")
```

For llama.cpp:

```elisp
(setq ai-llama-cpp-endpoint "http://localhost:8080/v1/chat/completions")
(setq ai-llama-cpp-model "llama-3.2")
```

Quick start:

- Start a chat session: `C-c a RET`
- Send code for analysis: select a region → `C-c a c e r` (explain code region)
- Fix text: select the text → `C-c a t g r` (fix grammar in region)
- Quick actions: `C-c a q` → select an action
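Providers can be switched interactively with `C-c a P` (listed below), but if you hop to a local backend often, a one-command helper built on the variables configured above may be handy — a personal sketch, not part of the package:

```elisp
;; Sketch: point ai-integration at a local Ollama server in one command.
(defun my/ai-use-ollama ()
  "Switch ai-integration to the local Ollama backend."
  (interactive)
  (setq ai-default-provider "ollama"
        ai-ollama-endpoint "http://localhost:11434/api/chat"
        ai-ollama-model "llama3.2")
  (message "AI provider: ollama (%s)" ai-ollama-model))
```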
Global:

- `C-c a RET` - Start AI chat
- `C-c a N` - New session
- `C-c a S` - Save session
- `C-c a L` - Load session
- `C-c a P` - Change provider
- `C-c a O` / `C-c a C` / `C-c a G` - Switch to OpenAI/Claude/Gemini
- `C-c a v` - Toggle streaming
- `C-c a k` - Cancel request
Code analysis (prefix `C-c a c`; the final key picks the scope):

- `e r/b/p/f` - Explain region/buffer/point/file
- `f r/b/p/f` - Fix code
- `r r/b/p/f` - Review code (comprehensive)
- `s r/b/p/f` - Security review
- `p r/b/p/f` - Performance review
- `y r/b/p/f` - Style review
- `b r/b/p/f` - Bug review
- `TAB` - Complete at point
Text editing (prefix `C-c a t`):

- `s r/b/p/f` - Fix style
- `g r/b/p/f` - Fix grammar
- `l r/b/p/f` - Fix spelling
- `e r/b/p/f` - Explain text
Send to session:

- `r/b/f/p` - Send to existing session
- `R/B/F/P` - Send to new session
In the chat buffer:

- `C-c C-c` - Send input
- `C-c C-v` - Send input (non-streaming)
- `C-c C-t` - Toggle streaming
- `C-c C-r` - Regenerate last response
- `C-c C-k` - Cancel request
- `C-c C-m` - Select model
- `C-c C-y` - Copy last response
- `C-c C-x s` - Set system prompt
- `C-c C-x t` - Use template
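Chat-buffer keys can be remapped the usual Emacs way. Note that the keymap symbol `ai-chat-mode-map` below is my guess, so the snippet guards with `boundp` and is a no-op if the package names it differently:

```elisp
;; Sketch: mirror "send input" (C-c C-c) onto C-RET in chat buffers.
;; `ai-chat-mode-map' is a guessed name -- verify it in the source.
(with-eval-after-load 'ai-integration
  (when (boundp 'ai-chat-mode-map)
    (let ((send (lookup-key ai-chat-mode-map (kbd "C-c C-c"))))
      (when (commandp send)
        (define-key ai-chat-mode-map (kbd "C-<return>") send)))))
```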
Examples:

```
;; Select a function and explain it
C-c a c e r

;; Review entire buffer for security issues
C-c a c s b

;; Fix code at point
C-c a c f p
```

```
;; Fix grammar in selected text
C-c a t g r

;; Improve writing style of paragraph
C-c a t s p

;; Fix spelling in entire buffer
C-c a t l b
```

```
;; Start chat session
C-c a RET

;; In the chat buffer, type your message and press:
C-c C-c   ; Send with streaming
C-c C-v   ; Send without streaming
```

- Sessions are automatically associated with source buffers
- Save/load conversations for later reference
- Multiple sessions per buffer supported
- Change providers mid-conversation
- Each provider has optimized settings
- Support for local AI models
- Use predefined conversation templates
- Set custom system prompts for different roles
- Quick actions for common tasks
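The supported way to set a system prompt is `C-c C-x s` inside a chat buffer. If the package also exposes it as a variable, a default role could be set in your init file — but the variable name `ai-system-prompt` below is purely my guess, hence the `boundp` guard:

```elisp
;; Hypothetical: `ai-system-prompt' is a guessed variable name; this is a
;; no-op unless ai-integration.el actually defines it.
(with-eval-after-load 'ai-integration
  (when (boundp 'ai-system-prompt)
    (setq ai-system-prompt
          "You are a careful senior code reviewer. Be concise and concrete.")))
```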
For troubleshooting:

- `M-x ai-toggle-debug`
- `M-x ai-diagnose-streaming`
- `M-x ai-debug-session-state`

This project welcomes contributions! Please feel free to:
- Report bugs and issues
- Suggest new features
- Submit pull requests
- Improve documentation
MIT License - see the source file for details.
Note: This package requires Emacs 27.1+ and an active internet connection for cloud AI providers. Local providers (Ollama, llama.cpp) work offline once set up.