
@aash-mohammad

Issue

Federated Fine-Tuning of Tiny-SD with Differential Privacy Support

Description

This PR introduces an end-to-end example of federated fine-tuning for a diffusion model using Segmind Tiny-SD within the Flower Federated Learning (FL) framework.

The implementation demonstrates how Low-Rank Adaptation (LoRA) enables parameter-efficient fine-tuning of a diffusion model in a distributed setting, making training feasible for clients with limited computational resources.
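For reference, here is a minimal sketch of how LoRA adapters can be attached to the Tiny-SD UNet so that only a small set of low-rank tensors is trained and exchanged. The rank, alpha, and target module names below are illustrative choices, not necessarily the exact values used in this example:

```python
# Sketch (illustrative): attach LoRA adapters to the attention projections of the
# Tiny-SD UNet so that only a small number of parameters are trained in federation.
from diffusers import UNet2DConditionModel
from peft import LoraConfig

unet = UNet2DConditionModel.from_pretrained("segmind/tiny-sd", subfolder="unet")
unet.requires_grad_(False)  # freeze the base model

lora_config = LoraConfig(
    r=4,                     # rank of the low-rank update (illustrative value)
    lora_alpha=4,
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # attention projections
)
unet.add_adapter(lora_config)  # requires diffusers with the PEFT integration

# Only the LoRA tensors remain trainable and need to be shared with the server.
trainable = [n for n, p in unet.named_parameters() if p.requires_grad]
print(f"{len(trainable)} trainable LoRA tensors")
```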

The Oxford Flowers dataset is used for training, providing a standard image-generation benchmark for evaluating model adaptation.

🔐 Privacy-Preserving Training Extensions

In addition to standard federated fine-tuning, this example integrates Differential Privacy (DP) mechanisms to showcase privacy-aware model training.

Two levels of privacy protection are supported:

1️⃣ Sample-Level Privacy

  • Implemented using Opacus

  • Uses DP-SGD with gradient clipping and noise injection

  • Protects the contribution of individual training samples during local client updates (see the Opacus sketch below)
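A minimal sketch of this mechanism, wrapping a client's model, optimizer, and data loader with Opacus' PrivacyEngine. The toy model, noise multiplier, and clipping norm are placeholders for illustration, not the defaults shipped with this example:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy stand-ins for the LoRA-augmented diffusion model and the client's local data;
# the real example would pass the Tiny-SD training components here instead.
model = nn.Linear(16, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = TensorDataset(torch.randn(64, 16), torch.randn(64, 4))
train_loader = DataLoader(data, batch_size=8)

privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,   # scale of Gaussian noise added to clipped gradients
    max_grad_norm=1.0,      # per-sample gradient clipping bound
)

criterion = nn.MSELoss()
for x, y in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()        # DP-SGD step: clip per-sample gradients, add noise

# The spent privacy budget can be reported alongside the model update.
print("epsilon:", privacy_engine.get_epsilon(delta=1e-5))
```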

2️⃣ Output-Level Privacy

  • Applied to model updates or outputs before sharing

  • Supports both Laplace and Gaussian noise mechanisms

  • Reduces the risk of information leakage from shared model parameters (see the noise-injection sketch below)
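A minimal sketch of the idea: calibrated Laplace or Gaussian noise is added to a model update before it leaves the client. The helper name and the scale calibration shown are illustrative assumptions, not the exact interface of this example:

```python
import numpy as np

def privatize_update(update, mechanism="gaussian", sensitivity=1.0,
                     epsilon=1.0, delta=1e-5, rng=None):
    """Add calibrated noise to a model update before it is shared (illustrative)."""
    rng = rng if rng is not None else np.random.default_rng()
    if mechanism == "laplace":
        # Laplace mechanism: scale = sensitivity / epsilon gives pure epsilon-DP.
        noise = rng.laplace(0.0, sensitivity / epsilon, size=update.shape)
    elif mechanism == "gaussian":
        # Gaussian mechanism: standard calibration for (epsilon, delta)-DP.
        sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
        noise = rng.normal(0.0, sigma, size=update.shape)
    else:
        raise ValueError(f"unknown mechanism: {mechanism}")
    return update + noise

# Example: perturb a (fake) LoRA weight delta before sending it to the server.
delta_w = np.zeros((4, 16), dtype=np.float32)
noisy_delta = privatize_update(delta_w, mechanism="laplace", epsilon=2.0)
```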

Related issues/PRs

#6045

Proposal

This contribution demonstrates:

  • Federated LoRA-based fine-tuning of a diffusion model (a Flower client sketch is shown after this list)

  • Integration of sample-level and output-level differential privacy

  • Practical trade-offs between model quality and privacy guarantees

  • How privacy-enhanced training can be combined with parameter-efficient adaptation in decentralized environments
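A rough sketch of how the federated LoRA fine-tuning could be wired into a Flower NumPyClient that exchanges only the LoRA tensors. The `local_train` helper and the parameter-filtering logic are assumptions for illustration, not the exact code of this example:

```python
import flwr as fl
import torch


class LoRADiffusionClient(fl.client.NumPyClient):
    """Flower client that only sends/receives the trainable LoRA tensors."""

    def __init__(self, unet, train_loader):
        self.unet = unet
        self.train_loader = train_loader
        # Names of the trainable LoRA tensors, in a fixed order.
        self.lora_keys = [n for n, p in unet.named_parameters() if p.requires_grad]

    def get_parameters(self, config):
        state = self.unet.state_dict()
        return [state[k].cpu().numpy() for k in self.lora_keys]

    def set_parameters(self, parameters):
        state = {k: torch.tensor(v) for k, v in zip(self.lora_keys, parameters)}
        self.unet.load_state_dict(state, strict=False)

    def fit(self, parameters, config):
        self.set_parameters(parameters)
        local_train(self.unet, self.train_loader)  # hypothetical training helper
        return self.get_parameters(config), len(self.train_loader.dataset), {}
```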

Overall, this PR provides a practical reference for privacy-preserving federated training of generative models under resource constraints.

Explanation

Checklist

  • Implement proposed change
  • Write tests
  • Update documentation
  • Make CI checks pass
  • Ping maintainers on Slack (channel #contributions)

Any other comments?

aash-mohammad commented Jan 25, 2026

Hi @jafermarq ,

I hope you’re doing well. Regarding #6071, if it hasn’t been completed yet, you may consider closing it, as this PR already includes the same functionality.

By default, if the privacy-related flags aren’t enabled, the example runs as standard diffusion model fine-tuning. The main difference here is that I’m using a smaller model compared to the previous PR.

Thank you for your time and review!

The github-actions bot added the Contributor label (used to determine which PRs come from external contributors) on Jan 25, 2026.