A command-line tool that evaluates Markdown content using LLMs and provides quality scores. Think of it like Vale, but instead of pattern matching it uses LLMs, letting you catch subjective issues like clarity, tone, and technical accuracy.
- LLM-based - Uses LLMs to check content quality
- CLI Support - Run locally or in CI/CD pipelines
- Consistent Evaluations - Write structured evaluation prompts to get repeatable results
- Quality Scores & Thresholds - Set scores and thresholds for your quality standards
Install dependencies:
```bash
npm install
```

VectorLint supports multiple LLM providers. Choose and configure your preferred provider using environment variables.
Copy the example environment file and configure your API credentials:
```bash
cp .env.example .env
# Edit .env with your actual API credentials
```

Configure Azure OpenAI in your `.env` file:
```ini
# Azure OpenAI Configuration
LLM_PROVIDER=azure-openai
AZURE_OPENAI_API_KEY=your-api-key-here
AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com
AZURE_OPENAI_DEPLOYMENT_NAME=your-deployment-name
AZURE_OPENAI_API_VERSION=2024-02-15-preview
AZURE_OPENAI_TEMPERATURE=0.2
```

Configure Anthropic in your `.env` file:
```ini
# Anthropic Configuration
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=your-anthropic-api-key-here
ANTHROPIC_MODEL=claude-3-sonnet-20240229
ANTHROPIC_MAX_TOKENS=4096
ANTHROPIC_TEMPERATURE=0.2
```

Configure Perplexity in your `.env` file to enable optional online search for fact verification:
```ini
# Perplexity Configuration
SEARCH_PROVIDER=perplexity
PERPLEXITY_API_KEY=pplx-your-api-key-here
```

Configure OpenAI in your `.env` file:
```ini
# OpenAI Configuration
LLM_PROVIDER=openai
OPENAI_API_KEY=your-openai-api-key-here
OPENAI_MODEL=gpt-4o
OPENAI_TEMPERATURE=0.2
```

Model options:
- `gpt-4o`: Best quality for comprehensive assessments (default)
- `gpt-4o-mini`: Cost-optimized for bulk processing
- `gpt-4-turbo`: Alternative high-quality option
For consistent evaluation results, it's recommended to use relatively low temperature values (0.1-0.3) to reduce randomness in model responses. This helps ensure more predictable and reproducible quality assessments.
Copy the sample and edit for your project:
```bash
cp vectorlint.example.ini vectorlint.ini
```

Keys (PascalCase):
- `PromptsPath`: directory containing your `.md` prompts
- `ScanPaths`: bracketed list of file patterns to scan (supports only `.md` and `.txt`)
Example (`vectorlint.example.ini`):

```ini
PromptsPath=prompts
ScanPaths=[*.md]
Concurrency=4
```
Note: `vectorlint.ini` is git-ignored; commit `vectorlint.example.ini` as the template.
Prompts are markdown files. VectorLint loads all `.md` files from `PromptsPath` and runs each one against your content. The result is an aggregated report with one section per prompt.
- Prompts do not need a placeholder; the file content is injected automatically as a separate message
- Prompts start with a YAML frontmatter block that defines the evaluation criteria (names, weights, and optional thresholds/severities). Keep the body human‑readable
- VectorLint enforces a structured JSON response via the API and parses scores automatically - you don't need to specify output format in your prompts
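As an illustration, a prompt file could look like the following. The frontmatter field names used here (`criteria`, `name`, `weight`, `threshold`, `severity`) are assumptions based on the description above, not a confirmed schema — check the sample prompts shipped with the repo for the exact format:

```markdown
---
# Hypothetical frontmatter: criteria names, weights, and optional
# thresholds/severities, as described in the docs above.
criteria:
  - name: clarity
    weight: 0.5
    threshold: 70
  - name: tone
    weight: 0.3
  - name: accuracy
    weight: 0.2
    severity: error
---

Evaluate the article for clarity, tone, and technical accuracy.
Penalize jargon that is not defined on first use, and flag any
claims that appear factually doubtful.
```

Note that the body stays human-readable prose; no placeholder for the article text and no output-format instructions are needed, since VectorLint injects the content and enforces the JSON response itself.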
Run VectorLint without building:
```bash
# Basic usage
npm run dev -- path/to/article.md

# See what's being sent to the LLM
npm run dev -- --verbose path/to/article.md

# Debug mode: show prompt and full JSON response
npm run dev -- --verbose --show-prompt --debug-json path/to/article.md
```

Or make the script executable:
```bash
chmod +x src/index.ts
./src/index.ts path/to/article.md
```

Control which prompts apply to which files using INI sections. Precedence: `Prompt:<Id>` → `Directory:<Alias>` → `Defaults`. Excludes are unioned and win over includes.
Example:

```ini
[Prompts]
paths = ["Default:prompts", "Blog:prompts/blog"]

[Defaults]
include = ["**/*.md"]
exclude = ["archived/**"]

[Directory:Blog]
include = ["content/blog/**/*.md"]
exclude = ["content/blog/drafts/**"]

[Prompt:Headline]
include = ["content/blog/**/*.md"]
exclude = ["content/blog/drafts/**"]
```
Notes:
- Aliases in `[Prompts].paths` tie a prompt's folder to a logical name
- The CLI derives a prompt's alias from its actual file path and applies the mapping per scanned file
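The precedence rules can be sketched as follows. This is a simplified illustration of the resolution logic as described, not VectorLint's actual implementation, and the glob matcher is deliberately minimal:

```typescript
type Rules = { include?: string[]; exclude?: string[] };

// Minimal glob matcher supporting `*` and `**` (illustration only; a real
// implementation would use a proper glob library).
function globToRegex(glob: string): RegExp {
  const source = glob
    .split("**")
    .map((part) =>
      part
        .split("*")
        .map((p) => p.replace(/[.+^${}()|[\]\\]/g, "\\$&"))
        .join("[^/]*")
    )
    .join(".*");
  return new RegExp(`^${source}$`);
}

const matchesAny = (file: string, globs: string[] = []): boolean =>
  globs.some((g) => globToRegex(g).test(file));

function promptApplies(
  file: string,
  defaults: Rules,
  directory?: Rules,
  prompt?: Rules
): boolean {
  // Precedence: the most specific section present decides inclusion
  // (Prompt:<Id> → Directory:<Alias> → Defaults).
  const rules = prompt ?? directory ?? defaults;
  if (!matchesAny(file, rules.include)) return false;
  // Excludes are unioned across all levels and win over includes.
  const excludes = [
    ...(defaults.exclude ?? []),
    ...(directory?.exclude ?? []),
    ...(prompt?.exclude ?? []),
  ];
  return !matchesAny(file, excludes);
}

// Mirrors the example INI above.
const DEFAULTS: Rules = { include: ["**/*.md"], exclude: ["archived/**"] };
const BLOG: Rules = {
  include: ["content/blog/**/*.md"],
  exclude: ["content/blog/drafts/**"],
};
```

For `content/blog/drafts/post.md`, the Blog directory's include matches, but the unioned excludes win, so the prompt is skipped — which is the behavior the precedence rules describe.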
- Run in watch mode (local dev): `npm test`
- Single run (no watch): `npm run test:run`
- CI with coverage: `npm run test:ci`
Tests live under `tests/` and use Vitest. They validate config parsing (`PromptsPath`, `ScanPaths`), file discovery (including prompts exclusion), prompt-to-file mapping, and prompt aggregation with a mocked provider.
