RIML Lab

💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion on data unlearning in cutting-edge deep generative models. We will explore recent breakthroughs and open challenges, focusing on how these models handle unlearning tasks and where improvements can be made.

This Week's Presentation:

🔹 Title: Data Unlearning in Diffusion Models

🔸 Presenter: Aryan Komaei

🌀 Abstract:
Diffusion models have been shown to memorize and reproduce training data, raising legal and ethical concerns regarding data privacy and copyright compliance. Retraining these models from scratch to remove specific data is computationally costly, while existing unlearning methods often rely on strong assumptions or exhibit instability. To address these limitations, we introduce a new family of loss functions called Subtracted Importance Sampled Scores (SISS). SISS leverages importance sampling to provide the first method for data unlearning in diffusion models with theoretical guarantees.
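To give a feel for the idea, here is a minimal, hedged numpy sketch of a subtracted, importance-sampled denoising loss on toy scalar data. It is not the paper's exact SISS formulation: the mixture proposal, the Gaussian forward process, the stand-in `toy_eps_model` denoiser, and the weighting `lam` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_eps_model(x_t, t):
    # Stand-in "denoiser"; in practice this is the diffusion network eps_theta.
    return 0.1 * x_t

def siss_style_loss(x_keep, x_forget, lam=0.5, t=0.5):
    """Sketch of a subtracted, importance-sampled score loss (illustrative only).

    Noisy samples are drawn from a 50/50 mixture of the keep and forget
    points; importance weights re-target each term to its own distribution.
    """
    # VP-style forward diffusion: x_t = a * x0 + s * eps, eps ~ N(0, I).
    a, s = np.sqrt(1 - t), np.sqrt(t)
    eps = rng.standard_normal(x_keep.shape)
    # Sample x_t from the mixture: half the time around keep, half around forget.
    x0 = x_keep if rng.random() < 0.5 else x_forget
    x_t = a * x0 + s * eps

    def gauss(x, mu):
        # Unnormalized Gaussian density of x_t given a clean point mu.
        return np.exp(-np.sum((x - a * mu) ** 2) / (2 * s**2))

    p_keep, p_forget = gauss(x_t, x_keep), gauss(x_t, x_forget)
    mix = 0.5 * p_keep + 0.5 * p_forget
    w_keep, w_forget = p_keep / mix, p_forget / mix  # importance weights

    # Ground-truth noise targets under each hypothesis for x0.
    eps_keep = (x_t - a * x_keep) / s
    eps_forget = (x_t - a * x_forget) / s
    pred = toy_eps_model(x_t, t)

    keep_term = w_keep * np.sum((pred - eps_keep) ** 2)
    forget_term = w_forget * np.sum((pred - eps_forget) ** 2)
    # Subtracting the forget-data term pushes gradient descent to unlearn it.
    return keep_term - lam * forget_term

loss = siss_style_loss(np.ones(4), -np.ones(4))
print(float(loss))
```

Both expectations are estimated from the same mixture samples, which is what keeps the estimator stable compared with naively ascending the loss on the forget set alone.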

Session Details:
- 📅 Date: Tuesday
- 🕒 Time: 4:45 - 5:45 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban

We look forward to your participation! ✌️
