# Luna Modelling API

## Table of Contents

- Overview
- Technology Stack
- Project Structure
- API Design
- Database Design
- Caching and Rate Limiting
- Development Setup
- Deployment
- Best Practices
- Future Enhancements
## Overview

Luna Modelling API is a FastAPI-based service that exposes Kalman filtering through a RESTful API. It is designed to process time-series data with state-of-the-art filtering techniques while maintaining scalability and performance.

### Key Features
- RESTful API endpoints for Kalman filtering
- Account-based authentication
- Rate limiting and quota management
- Containerized deployment
- Asynchronous processing
- Cache-database synchronization
## Technology Stack

- Python 3.9+: Main programming language
- FastAPI: Modern, fast web framework for building APIs
- SQLAlchemy: SQL toolkit and ORM
- Pydantic: Data validation using Python type annotations
- NumPy: Numerical computing library for Kalman filter implementation
- Redis: In-memory data store for caching and rate limiting
- PostgreSQL: Primary database
- Docker: Containerization
- Docker Compose: Multi-container orchestration
- asyncpg: Asynchronous PostgreSQL driver
- alembic: Database migration tool
- uvicorn: ASGI server
- python-jose: JWT token handling
- passlib: Password hashing
## Project Structure

```
luna_modelling_api/
├── api/
│   ├── routes/
│   │   ├── __init__.py
│   │   └── kalman.py
│   └── deps.py
├── core/
│   ├── config.py
│   └── security.py
├── models/
│   ├── account.py
│   └── base.py
├── modelling/
│   ├── kalman_filter.py
│   └── constants.py
├── alembic/
│   └── versions/
├── tests/
├── docker/
├── .env
├── docker-compose.yml
├── Dockerfile
└── requirements.txt
```
## API Design

### Authentication

- API key-based authentication
- Keys stored in the database in UUID format
- Verification through dependency injection
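The verification flow can be sketched as follows. This is an illustrative stand-in, not the service's actual code: `FAKE_ACCOUNTS` replaces the PostgreSQL lookup, and `PermissionError` stands in for the `HTTPException(status_code=403)` a real FastAPI dependency would raise.

```python
import uuid

# Hypothetical in-memory stand-in for the accounts table; the real
# service would query PostgreSQL through SQLAlchemy instead.
FAKE_ACCOUNTS = {
    "3f2b6d1e-8c4a-4f0e-9b7d-2a1c5e8f0d94": {
        "account_name": "demo", "quota_limit": 500, "quota_used": 0,
    },
}

def verify_api_key(api_key: str) -> dict:
    """Validate the key's UUID format, then look up the account."""
    try:
        uuid.UUID(api_key)  # keys are stored as UUIDs
    except ValueError:
        raise PermissionError("malformed API key")  # would map to HTTP 403
    account = FAKE_ACCOUNTS.get(api_key)
    if account is None:
        raise PermissionError("unknown API key")    # would map to HTTP 403
    return account
```

In the service itself this logic would live in `api/deps.py` and be wired into routes with `Depends`, so every endpoint gets the resolved account injected.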
### Endpoints

`POST /api/v1/{account_id}/kalman`

Input schema:

```python
from typing import List

from pydantic import BaseModel

class KalmanInput(BaseModel):
    results: List[float]
```

Output schema:

```python
class KalmanOutput(BaseModel):
    processed_results: List[float]
    input_data: List[float]
```

Error responses:

- 404: Resource not found
- 403: Authorization error
- 422: Validation error
- 429: Rate limit exceeded
- 500: Internal server error
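For illustration, the smoothing behind `processed_results` can be approximated with a minimal one-dimensional Kalman filter. The function name, parameters, and defaults below are assumptions for the sketch, not the contents of the service's `kalman_filter.py`.

```python
import numpy as np

def kalman_1d(results, process_var=1e-5, measurement_var=1e-2):
    """Smooth a 1-D measurement series with a scalar Kalman filter."""
    x = float(results[0])   # initial state estimate
    p = 1.0                 # initial estimate variance
    out = np.empty(len(results))
    for i, z in enumerate(results):
        p += process_var                 # predict: uncertainty grows
        k = p / (p + measurement_var)    # Kalman gain
        x += k * (z - x)                 # update: pull estimate toward z
        p *= (1.0 - k)                   # update: uncertainty shrinks
        out[i] = x
    return out.tolist()
```

Each output value is a convex combination of the previous estimate and the new measurement, so a noisy but stationary series converges toward its underlying level.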
## Database Design

### Accounts Table

```sql
CREATE TABLE accounts (
    id SERIAL PRIMARY KEY,
    account_name VARCHAR(255) UNIQUE NOT NULL,
    api_key UUID UNIQUE NOT NULL,
    quota_limit INTEGER NOT NULL DEFAULT 500,
    quota_used INTEGER NOT NULL DEFAULT 0,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
```

SQLAlchemy model:

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.dialects.postgresql import UUID

from .base import Base

class Account(Base):
    __tablename__ = "accounts"

    id = Column(Integer, primary_key=True)
    account_name = Column(String, unique=True, nullable=False)
    api_key = Column(UUID(as_uuid=True), unique=True, nullable=False)
    quota_limit = Column(Integer, nullable=False, default=500)
    quota_used = Column(Integer, nullable=False, default=0)
```

## Caching and Rate Limiting

### Quota Key Schema

```
Key:    user:{user_id}:quota
Value:  {remaining_quota}
Expiry: none (managed through the sync process)
```
Delta-based synchronization:

- Update Redis in real time
- Track delta changes
- Periodically sync deltas to the database
- Reset the delta counter after each sync

Event-based synchronization:

- Emit quota change events
- Process events asynchronously
- Update both Redis and the database
- Maintain an event log for auditing
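The delta-tracking flow above might look like the following sketch. `QuotaCache` is an illustrative in-memory stand-in: in production the decrement would be an atomic Redis operation (e.g. `DECRBY`) and the sync would update the `accounts.quota_used` column.

```python
class QuotaCache:
    """In-memory stand-in for the user:{user_id}:quota Redis key."""

    def __init__(self, remaining: int):
        self.remaining = remaining  # mirrors the Redis value
        self.delta = 0              # usage accumulated since the last DB sync

    def consume(self, amount: int = 1) -> int:
        """Decrement quota in real time; reject when exhausted."""
        if self.remaining < amount:
            raise RuntimeError("quota exceeded")  # would surface as HTTP 429
        self.remaining -= amount
        self.delta += amount
        return self.remaining

    def sync(self, db_row: dict) -> None:
        """Periodic sync: push the accumulated delta to the database row."""
        db_row["quota_used"] += self.delta
        self.delta = 0  # reset the delta counter after sync
```

Keeping only the delta in flight means a crash between syncs loses at most the unsynced usage, while the database remains the durable source of truth.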
## Development Setup

1. Clone the repository.
2. Create and activate a virtual environment.
3. Install dependencies:
   ```bash
   pip install -r requirements.txt
   ```
4. Set up environment variables:
   ```bash
   cp .env.example .env
   ```
5. Run database migrations:
   ```bash
   alembic upgrade head
   ```
6. Start the development server:
   ```bash
   uvicorn main:app --reload
   ```
## Deployment

Build and run the full stack with Docker Compose:

```bash
docker-compose up --build
```

Dockerfile:

```dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

docker-compose.yml:

```yaml
version: "3.8"
services:
  api:
    build: .
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql+asyncpg://user:password@db:5432/luna
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      - db
      - redis
  db:
    image: postgres:13
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=luna
  redis:
    image: redis:6
```

## Best Practices

### API Development

- Use Pydantic models for request/response validation
- Implement proper error handling
- Use dependency injection for common functionality
- Document API endpoints using OpenAPI/Swagger
- Implement rate limiting and quotas
### Database

- Use migrations for schema changes
- Implement proper indexing
- Use connection pooling
- Implement retry mechanisms
- Regular backups
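One way to realize the retry bullet is exponential backoff around a transient database or network call. The helper below is a generic sketch, not code from this repository:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 0.05):
    """Call fn(), retrying with exponential backoff on any exception."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: propagate the last error
            time.sleep(base_delay * (2 ** attempt))  # 0.05s, 0.1s, ...
```

In practice you would narrow the `except` clause to genuinely transient errors (connection resets, timeouts) so that logic errors still fail fast.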
### Caching

- Implement proper cache invalidation
- Use atomic operations
- Handle cache misses gracefully
- Regular synchronization with database
- Monitor cache hit/miss rates
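Handling a cache miss gracefully typically means falling back to the database and repopulating the cache (a read-through pattern). In this sketch, `cache` and `db` are plain dicts standing in for Redis and PostgreSQL:

```python
def get_remaining_quota(cache: dict, db: dict, user_id: int) -> int:
    """Read-through lookup for the user:{user_id}:quota key."""
    key = f"user:{user_id}:quota"
    if key in cache:
        return cache[key]               # cache hit: serve from Redis
    row = db[user_id]                   # cache miss: fall back to the DB
    remaining = row["quota_limit"] - row["quota_used"]
    cache[key] = remaining              # repopulate so the next read hits
    return remaining
```

Counting how often each branch is taken gives exactly the hit/miss rate the monitoring bullet above calls for.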
### Security

- Store API keys securely
- Implement rate limiting
- Use HTTPS in production
- Regular security audits
- Input validation
### Monitoring

- Implement logging
- Monitor API usage
- Track error rates
- Monitor system resources
- Set up alerts
### Testing

- Unit tests for core functionality
- Integration tests for API endpoints
- Load testing for performance
- Regular security testing
- Automated CI/CD pipeline
## Future Enhancements

- Implement WebSocket support for real-time updates
- Add support for batch processing
- Implement more sophisticated rate limiting
- Add support for different Kalman filter configurations
- Implement analytics dashboard
- Add support for different time series models
- Implement automatic scaling based on load
- Add support for data export/import