Our goal is to build a novel deep-tech startup that tackles real-world problems from a new technical perspective. Our solution is a product-based software application that protects companies from ML attacks aimed at stealing their models or data, or at changing the logic of the ML system.
- Create a virtualenv (Python 3.7)
- Install dependencies inside the virtualenv (pip install -r requirements.pip)
- If you plan to use the defense, also install matplotlib; it is not required for running experiments, so it is not included in the requirements file
Before you can run any experiments, you must complete some setup:
- python generate_data_distribution.py: downloads the datasets and generates a static distribution of the training and test data, to provide consistency across experiments
- python generate_default_models.py: generates an instance of all of the models used in the paper and saves them to disk
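The "static distribution" idea can be sketched as below. This is an illustrative approximation, not the actual contents of generate_data_distribution.py; the seed, output file name, and split ratio are assumptions:

```python
import pickle
import random

def make_static_split(n_samples, train_fraction=0.8, seed=42,
                      path="data_distribution.pkl"):
    """Shuffle sample indices once with a fixed seed and persist the
    train/test split, so every experiment sees the same distribution."""
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)  # deterministic shuffle
    cut = int(n_samples * train_fraction)
    split = {"train": indices[:cut], "test": indices[cut:]}
    with open(path, "wb") as f:
        pickle.dump(split, f)
    return split

split = make_static_split(100)
```

Because the shuffle is seeded, rerunning the script reproduces the exact same split, which is what makes results comparable across experiments.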
Running an attack (choose one):
- python label_flipping_attack.py
- python attack_timing.py
- python malicious_participant_availability.py

Running the defense:
- python defense.py
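For intuition, the core of a label-flipping attack on a malicious participant's local data can be sketched as follows. This is a generic illustration, not the implementation in label_flipping_attack.py; the source class, target class, and flip fraction are assumptions:

```python
import random

def flip_labels(labels, source=1, target=7, fraction=0.5, seed=0):
    """Flip a fraction of the `source` labels to `target`, simulating a
    malicious participant poisoning its local training data."""
    rng = random.Random(seed)
    poisoned = list(labels)
    # Indices of all samples whose label matches the source class
    candidates = [i for i, y in enumerate(poisoned) if y == source]
    for i in rng.sample(candidates, int(len(candidates) * fraction)):
        poisoned[i] = target
    return poisoned

clean = [1] * 10 + [0] * 10
poisoned = flip_labels(clean)
```

A model trained on the poisoned labels learns to confuse the source class with the target class, which is the behavior the defense script is meant to detect and mitigate.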
Recommended default hyperparameters for CIFAR10 (using the provided CNN):
- Batch size: 10
- LR: 0.01
- Number of epochs: 200
- Momentum: 0.5
- Scheduler step size: 50
- Scheduler gamma: 0.5
- Minimum LR: 1e-10
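The scheduler settings above (step size, gamma, minimum LR) imply a step-decay schedule, which can be written in closed form. This assumes a StepLR-style decay with a floor, which is my reading of the hyperparameter names rather than code taken from the repo:

```python
def lr_at_epoch(epoch, base_lr=0.01, step_size=50, gamma=0.5, min_lr=1e-10):
    """Step decay: multiply the LR by `gamma` every `step_size` epochs,
    never dropping below `min_lr` (CIFAR10 defaults from this README)."""
    return max(min_lr, base_lr * gamma ** (epoch // step_size))
```

With 200 epochs and a step size of 50, the learning rate is halved three times over the run, ending at 0.00125.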
Recommended default hyperparameters for Fashion-MNIST (using the provided CNN):
- Batch size: 4
- LR: 0.001
- Number of epochs: 200
- Momentum: 0.9
- Scheduler step size: 10
- Scheduler gamma: 0.1
- Minimum LR: 1e-10
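Both sets of defaults can be kept in a single per-dataset lookup so experiments select them by name. The dict layout and key names here are an illustrative assumption, not the repo's actual configuration format:

```python
# Recommended defaults from this README, keyed by dataset name (assumed layout)
HYPERPARAMS = {
    "cifar10": {
        "batch_size": 10, "lr": 0.01, "epochs": 200, "momentum": 0.5,
        "scheduler_step_size": 50, "scheduler_gamma": 0.5, "min_lr": 1e-10,
    },
    "fashion_mnist": {
        "batch_size": 4, "lr": 0.001, "epochs": 200, "momentum": 0.9,
        "scheduler_step_size": 10, "scheduler_gamma": 0.1, "min_lr": 1e-10,
    },
}

def get_hyperparams(dataset):
    """Look up the recommended defaults for a dataset (case-insensitive)."""
    return HYPERPARAMS[dataset.lower()]
```

Centralizing the values this way keeps experiment scripts consistent and makes it obvious which knobs differ between the two datasets (batch size, momentum, and the scheduler settings).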