Federated Fine-Tuning of Tiny-SD with Differential Privacy Support #6428
Description
This PR introduces an end-to-end example of federated fine-tuning for a diffusion model using Segmind Tiny-SD within the Flower Federated Learning (FL) framework.
The implementation demonstrates how Low-Rank Adaptation (LoRA) enables parameter-efficient fine-tuning of a diffusion model in a distributed setting, making training feasible for clients with limited computational resources.
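To illustrate why LoRA keeps client-side training cheap, here is a minimal NumPy sketch of the low-rank update (shapes and names are illustrative only, not taken from the PR's code):

```python
import numpy as np

def lora_delta(A, B):
    """LoRA update: instead of training a full d_out x d_in weight matrix,
    train B (d_out x r) and A (r x d_in) with rank r << min(d_out, d_in)."""
    return B @ A

d_out, d_in, rank = 64, 64, 4
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))         # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, rank))                # trainable; zero init so delta starts at 0

W_adapted = W + lora_delta(A, B)           # effective weight used in the forward pass

full_params = d_out * d_in                 # 4096 parameters if trained fully
lora_params = A.size + B.size              # 512 trainable parameters with LoRA
```

Only `A` and `B` are trained and exchanged between clients and server, which is what makes federated rounds feasible on resource-constrained devices.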
The Oxford Flowers dataset serves as the training data, providing a standard image-generation benchmark for evaluating model adaptation.
🔐 Privacy-Preserving Training Extensions
In addition to standard federated fine-tuning, this example integrates Differential Privacy (DP) mechanisms to showcase privacy-aware model training.
Two levels of privacy protection are supported:
1️⃣ Sample-Level Privacy
Implemented using Opacus
Uses DP-SGD with gradient clipping and noise injection
Protects the contribution of individual training samples during local client updates
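The actual integration uses Opacus; as a rough, self-contained illustration of the DP-SGD mechanism it automates (per-sample clipping plus Gaussian noise; the function name and parameters here are hypothetical):

```python
import numpy as np

def dp_sgd_update(per_sample_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step: clip each per-sample gradient to
    L2 norm <= clip_norm, average, then add Gaussian noise whose scale
    is noise_multiplier * clip_norm / batch_size."""
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(
        0.0,
        noise_multiplier * clip_norm / len(per_sample_grads),
        size=mean_grad.shape,
    )
    return mean_grad + noise
```

Clipping bounds any single sample's influence on the update, and the noise masks what remains, which is what yields the per-sample privacy guarantee.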
2️⃣ Output-Level Privacy
Applied to model updates or outputs before sharing
Supports both Laplace and Gaussian noise mechanisms
Reduces the risk of information leakage from shared model parameters
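A minimal sketch of this output perturbation, assuming standard noise calibration (the function name, signature, and default delta are illustrative, not the PR's API):

```python
import numpy as np

def privatize_update(update, mechanism, sensitivity, epsilon, delta=1e-5, rng=None):
    """Add calibrated noise to a model update before it is shared.
    Laplace:  scale = sensitivity / epsilon            -> pure epsilon-DP
    Gaussian: sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon
                                                       -> (epsilon, delta)-DP"""
    rng = rng or np.random.default_rng()
    if mechanism == "laplace":
        return update + rng.laplace(0.0, sensitivity / epsilon, size=update.shape)
    if mechanism == "gaussian":
        sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
        return update + rng.normal(0.0, sigma, size=update.shape)
    raise ValueError(f"unknown mechanism: {mechanism}")
```

Larger `epsilon` means less noise and weaker privacy; in practice the sensitivity bound comes from clipping the update before noising.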
Related issues/PRs
#6045
Proposal
This contribution demonstrates:
Federated LoRA-based fine-tuning of a diffusion model
Integration of sample-level and output-level differential privacy
Practical trade-offs between model quality and privacy guarantees
How privacy-enhanced training can be combined with parameter-efficient adaptation in decentralized environments
Overall, this PR provides a practical reference for privacy-preserving federated training of generative models under resource constraints.
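The server-side step that ties these pieces together can be sketched as a sample-weighted average (FedAvg) over the clients' LoRA adapter weights; this is a simplification of what Flower's built-in FedAvg strategy performs each round, and the function name here is hypothetical:

```python
import numpy as np

def fedavg(client_updates, num_samples):
    """Weighted average of per-client parameter lists (FedAvg).
    client_updates: list of clients, each a list of layer arrays.
    num_samples:    number of local training samples per client."""
    total = sum(num_samples)
    return [
        sum(w * (n / total) for w, n in zip(layer, num_samples))
        for layer in zip(*client_updates)
    ]
```

Because only the small LoRA factors (possibly already noised by the DP mechanisms above) are averaged, the per-round communication cost stays low.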