Lightweight log aggregation and monitoring service. Ingest logs, define alerting rules, and track service health — all through a clean REST API.
- Log ingestion — single or batch, with service/level/tags/metadata
- Log querying — filter by service, level, with pagination
- Alert rules — define threshold-based rules (e.g. "5+ errors in 5 minutes")
- Alert evaluation — background scheduler checks rules every 60 seconds
- Alert cooldown — prevent alert spam with configurable cooldown periods
- Alert acknowledgment — mark alerts as handled
- Service stats — per-service log counts, level breakdowns, active alerts
- Rate limiting — per-service ingest throttling
- Health check — Redis connectivity and service metrics
- Auto-generated API docs — Swagger UI at `/docs`
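The threshold rules and cooldown above ("5+ errors in 5 minutes", suppress repeats) can be pictured with a small in-memory sketch. This is illustrative only: the service keeps this state in Redis, and the `ThresholdRule` class and `record` method are assumed names, not the project's API:

```python
from collections import deque

class ThresholdRule:
    """Fires when `threshold` matching logs land within `window_seconds`,
    then stays silent for `cooldown_seconds` to prevent alert spam."""

    def __init__(self, threshold: int, window_seconds: int = 300,
                 cooldown_seconds: int = 300) -> None:
        self.threshold = threshold
        self.window = window_seconds
        self.cooldown = cooldown_seconds
        self.timestamps: deque = deque()
        self.last_fired = float("-inf")

    def record(self, now: float) -> bool:
        """Record one matching log at time `now`; return True if the rule fires."""
        self.timestamps.append(now)
        # Evict entries that have slid out of the window.
        while self.timestamps and self.timestamps[0] <= now - self.window:
            self.timestamps.popleft()
        if (len(self.timestamps) >= self.threshold
                and now - self.last_fired >= self.cooldown):
            self.last_fired = now
            return True   # an alert event should be emitted
        return False      # below threshold, or still cooling down
```

In the real service the timestamps would come from `time.time()` at ingest, and the background scheduler would run an evaluation like this on its 60-second tick.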
```
┌──────────────┐     ┌──────────────┐     ┌──────────┐
│  Log Source  │────▶│              │     │          │
│  (API call)  │     │   FastAPI    │────▶│  Redis   │
└──────────────┘     │              │◀────│  Store   │
                     │  - Ingest    │     │          │
┌──────────────┐     │  - Alerting  │     └──────────┘
│  Dashboard   │◀────│  - Stats     │
│  (REST API)  │     │  - Scheduler │
└──────────────┘     └──────────────┘
```
- REST API (`/api/*`) — ingest logs, manage alerts, query stats
- Background Scheduler — APScheduler checks alert rules periodically
- Redis — log storage, alert state, cooldown tracking
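The Redis side of the diagram can be sketched roughly: each service/level pair maps to a capped, expiring list. This is an assumption for illustration; the `ingest_log` helper and the `logs:{service}:{level}` key layout are not the project's documented schema:

```python
import json
import time

def ingest_log(r, service: str, level: str, message: str,
               max_logs: int = 10000, retention_hours: int = 168) -> str:
    """Push one log entry onto a capped, expiring Redis list.

    `r` is any client exposing redis-py's lpush/ltrim/expire methods.
    Assumed key layout: logs:{service}:{level} -> JSON entries, newest first.
    """
    key = f"logs:{service}:{level}"
    entry = json.dumps({"ts": time.time(), "message": message})
    r.lpush(key, entry)                    # newest entry at the head
    r.ltrim(key, 0, max_logs - 1)          # enforce the per-service/level cap
    r.expire(key, retention_hours * 3600)  # enforce the retention window
    return key
```

With a real `redis.Redis` client this pattern keeps memory bounded even under heavy ingest, since the trim and expiry run on every write.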
```bash
docker compose up --build
```

```bash
redis-server
pip install -e ".[dev]"
uvicorn app.main:app --reload
```

| Method | Endpoint | Description |
|---|---|---|
| GET | `/` | Service info |
| GET | `/api/health` | Health check |
| POST | `/api/auth/token` | Generate JWT token |
| POST | `/api/ingest` | Ingest a single log |
| POST | `/api/ingest/batch` | Ingest multiple logs |
| GET | `/api/logs/{service}` | Query logs for a service |
| GET | `/api/logs/{service}/count` | Count logs |
| POST | `/api/alerts` | Create an alert rule |
| GET | `/api/alerts` | List alert rules |
| GET | `/api/alerts/{rule_id}` | Get specific rule |
| DELETE | `/api/alerts/{rule_id}` | Delete a rule |
| GET | `/api/events/{service}` | Get alert events |
| POST | `/api/events/{service}/{event_id}/acknowledge` | Acknowledge an alert |
| GET | `/api/services` | List all services with stats |
```bash
# Ingest a log
curl -X POST http://localhost:8000/api/ingest \
  -H "Content-Type: application/json" \
  -d '{"service": "api-gateway", "level": "error", "message": "Connection refused"}'

# Create an alert rule
curl -X POST http://localhost:8000/api/alerts \
  -H "Content-Type: application/json" \
  -d '{"name": "High Errors", "service": "api-gateway", "level": "error", "threshold": 5}'

# Check service stats
curl http://localhost:8000/api/services
```

All settings via environment variables (prefix `SENTRYLITE_`):
| Variable | Default | Description |
|---|---|---|
| `SENTRYLITE_REDIS_URL` | `redis://localhost:6379/1` | Redis connection URL |
| `SENTRYLITE_SECRET_KEY` | `change-me-in-production-32chars!!` | JWT signing key |
| `SENTRYLITE_LOG_RETENTION_HOURS` | `168` | How long logs persist (7 days) |
| `SENTRYLITE_MAX_LOGS_PER_SERVICE` | `10000` | Max logs stored per service/level |
| `SENTRYLITE_ALERT_CHECK_INTERVAL_SECONDS` | `60` | How often alert rules are evaluated |
| `SENTRYLITE_ALERT_COOLDOWN_SECONDS` | `300` | Min time between same-rule alerts |
| `SENTRYLITE_RATE_LIMIT_INGEST` | `500` | Max ingests per minute per service |
```bash
pip install -e ".[dev]"
pytest tests/ -v
```

47 tests covering:
- Redis service — ingestion, retrieval, alert rules, alert events, cooldown, rate limiting
- Alert engine — threshold evaluation, rule triggering, cooldown behavior
- REST API — all endpoints via httpx async client
- Models — Pydantic schema validation and serialization
- FastAPI — async web framework with auto-docs
- Redis — log storage, alert state, cooldown tracking
- APScheduler — background alert evaluation
- Pydantic — data validation and settings
- python-jose + bcrypt — auth and password hashing
- Docker Compose — one-command development setup
MIT