# TuneFlow

TuneFlow is a lightweight, modular, and extensible distributed hyperparameter tuning engine designed to make hyperparameter optimization simple and efficient. Built with Python, it supports various search strategies and can be easily extended with custom models and search algorithms.
## Features

- 🚀 Distributed hyperparameter tuning with parallel trial execution
- 🔍 Multiple search strategies (Random Search included, more to come)
- 🧩 Plugin-based architecture for easy extension
- 📊 Logging and result tracking
- 📝 YAML-based configuration
- 🤖 Support for custom models and search strategies
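At its core, parallel trial execution is a pool of workers consuming hyperparameter configurations and reporting scores back. The following is a minimal stdlib sketch of that idea only, not TuneFlow's actual implementation; `run_trial` and the scoring rule are made up for illustration:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_trial(config):
    """Hypothetical trial: train/evaluate a model for one config.
    Here we fake a score that peaks at max_depth=6, learning_rate=0.1."""
    score = 1.0 / (1 + abs(config["max_depth"] - 6) + abs(config["learning_rate"] - 0.1))
    return config, score

configs = [
    {"max_depth": d, "learning_rate": lr}
    for d in (3, 6, 9)
    for lr in (0.05, 0.1, 0.2)
]

# max_workers plays the role of max_parallel in a TuneFlow config.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(run_trial, c) for c in configs]
    results = [f.result() for f in as_completed(futures)]

best_config, best_score = max(results, key=lambda r: r[1])
print(best_config)  # → {'max_depth': 6, 'learning_rate': 0.1}
```

A real orchestrator would also handle retries, logging, and result persistence, but the submit-and-collect loop is the essential shape.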
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/tuneflow.git
   cd tuneflow
   ```

2. Install the package in development mode:

   ```bash
   pip install -e .
   ```

## Quick Start

1. Create a YAML configuration file (e.g., `examples/iris_random.yaml`) with your experiment settings.

2. Run the tuning experiment:

   ```bash
   tuneflow run examples/iris_random.yaml
   ```

3. Monitor the progress and view results in the console and output files.
## Example Configuration

Here's an example configuration for tuning an XGBoost model on the Iris dataset:

```yaml
# experiment.yaml
experiment_name: "iris_classification"
model: "xgboost"
strategy: "random"
num_trials: 10
max_parallel: 2

# Dataset settings
dataset_path: "iris.csv"  # Uses sklearn's built-in dataset

# Search space
search_space:
  max_depth:
    type: int
    low: 3
    high: 10
  learning_rate:
    type: float
    low: 0.01
    high: 0.3
  # ... more hyperparameters ...

# Fixed model parameters
objective: "multi:softprob"
num_class: 3
random_state: 42
```

## Project Structure

```
tuneflow/
├── tuneflow/
│   ├── __init__.py            # Package initialization
│   ├── cli.py                 # Command-line interface
│   ├── orchestrator.py        # Core orchestration logic
│   ├── worker.py              # Worker process for trial execution
│   ├── models/                # Model implementations
│   │   ├── __init__.py
│   │   ├── base_model.py      # Base model class
│   │   └── xgboost_model.py   # Example XGBoost implementation
│   ├── plugins/               # Search strategy implementations
│   │   ├── __init__.py
│   │   ├── base_strategy.py   # Base strategy class
│   │   └── random_search.py   # Random search implementation
│   └── utils/                 # Utility functions
│       ├── __init__.py
│       ├── yaml_loader.py     # YAML configuration loader
│       └── logger.py          # Logging utilities
├── examples/                  # Example configurations
│   └── iris_random.yaml       # Example Iris dataset tuning
├── tests/                     # Unit tests
├── setup.py                   # Package installation script
└── README.md                  # This file
```
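Before the orchestrator can use a configuration like the one above, it has to be parsed and validated. Here is a rough sketch of what a loader such as `yaml_loader.py` might do, assuming PyYAML is available; the required-key list and validation rules are illustrative, not TuneFlow's actual API:

```python
import yaml  # PyYAML

REQUIRED_KEYS = {"experiment_name", "model", "strategy", "num_trials", "search_space"}

def load_config(text):
    """Parse a YAML experiment config and check required keys (illustrative)."""
    config = yaml.safe_load(text)
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"missing config keys: {sorted(missing)}")
    return config

config = load_config("""
experiment_name: "iris_classification"
model: "xgboost"
strategy: "random"
num_trials: 10
search_space:
  max_depth: {type: int, low: 3, high: 10}
""")
print(config["search_space"]["max_depth"]["high"])  # → 10
```

Failing fast on missing keys at load time keeps errors out of the worker processes, where they are harder to surface.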
## Creating a Custom Model

To create a custom model, subclass `BaseModel` and implement the required methods:

```python
from tuneflow.models.base_model import BaseModel

class MyCustomModel(BaseModel):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.model = None

    def fit(self, X_train, y_train, X_val=None, y_val=None):
        # Implement model training
        pass

    def evaluate(self, X, y):
        # Implement model evaluation
        pass

    @classmethod
    def get_search_space(cls):
        # Return the default search space for this model
        return {
            "param1": {"type": "float", "low": 0.0, "high": 1.0},
            # ... more parameters ...
        }
```

## Creating a Custom Search Strategy

To create a custom search strategy, subclass `SearchStrategy`:
```python
from tuneflow.plugins.base_strategy import SearchStrategy

class MyCustomSearch(SearchStrategy):
    def __init__(self, search_space: dict, **kwargs):
        super().__init__(search_space)
        # Initialize your search strategy

    def next_config(self, results=None):
        # Generate the next configuration
        pass

    def is_complete(self):
        # Check if the search is complete
        pass
```
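For a concrete (if simplified) picture of the protocol, here is a standalone random-search sketch with the same `next_config`/`is_complete` shape. It deliberately avoids importing tuneflow, so the class and its details are illustrative rather than the shipped `random_search.py`:

```python
import random

class SimpleRandomSearch:
    """Illustrative random search exposing next_config / is_complete."""

    def __init__(self, search_space, num_trials=10, seed=None):
        self.search_space = search_space
        self.num_trials = num_trials
        self.trials_done = 0
        self.rng = random.Random(seed)

    def next_config(self, results=None):
        # Sample each hyperparameter independently from its declared range.
        config = {}
        for name, spec in self.search_space.items():
            if spec["type"] == "int":
                config[name] = self.rng.randint(spec["low"], spec["high"])
            else:  # "float"
                config[name] = self.rng.uniform(spec["low"], spec["high"])
        self.trials_done += 1
        return config

    def is_complete(self):
        return self.trials_done >= self.num_trials

space = {
    "max_depth": {"type": "int", "low": 3, "high": 10},
    "learning_rate": {"type": "float", "low": 0.01, "high": 0.3},
}
search = SimpleRandomSearch(space, num_trials=3, seed=0)
configs = []
while not search.is_complete():
    configs.append(search.next_config())
print(len(configs))  # → 3
```

A smarter strategy (e.g., Bayesian optimization) would use the `results` argument of `next_config` to bias future samples; random search simply ignores it.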
## Running Tests

To run the test suite:

```bash
pytest tests/
```

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
## License

This project is licensed under the MIT License - see the LICENSE file for details.