From 6332b263865779a85c06641c36b372e7dbb1da15 Mon Sep 17 00:00:00 2001
From: Boyuan Yang
Date: Mon, 23 Dec 2024 23:37:31 -0500
Subject: [PATCH] Fix website build

---
 content/_index.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/content/_index.md b/content/_index.md
index d41c0db..71fc6c8 100644
--- a/content/_index.md
+++ b/content/_index.md
@@ -139,7 +139,6 @@ sections:
 {{< youtube id="9D3ue-xhQkA" >}}
 {{< /columns >}}
 Illegally using fine-tuned diffusion models to forge human portraits has been a major threat to trustworthy AI. While most existing work focuses on detecting AI-forged content, our recent work instead aims to mitigate such illegal domain adaptation by applying safeguards to diffusion models. Unlike model unlearning techniques, which cannot prevent the illegal domain knowledge from being relearned with custom or public data, our approach, namely FreezeAsGuard, suggests that the model publisher selectively freezes tensors in pre-trained models that are critical to the convergence of fine-tuning in illegal domains. FreezeAsGuard can effectively reduce the quality of images generated in illegal domains and ensure that these images are unrecognizable as target objects. Meanwhile, it has minimal impact on legal domain adaptations, and can save up to 48% GPU memory and 21% wall-clock time in model fine-tuning.
-
 {{< /columns >}}
 {{< hr >}}
 [**View more...**](/projects/trustworthy-ai/)
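The selective tensor freezing that the blurb describes can be sketched with a toy PyTorch model. Note this is only an illustrative sketch: the frozen-tensor set below is a hypothetical stand-in for the importance mask that FreezeAsGuard actually learns, and `freeze_selected` is an invented helper, not part of any released API.

```python
import torch
import torch.nn as nn

def freeze_selected(model: nn.Module, frozen_names: set) -> list:
    """Freeze the named tensors; return the parameters that stay trainable."""
    trainable = []
    for name, param in model.named_parameters():
        if name in frozen_names:
            param.requires_grad_(False)  # excluded from gradient updates
        else:
            trainable.append(param)
    return trainable

# Toy two-layer "model" standing in for a diffusion UNet.
model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))

# Hypothetical mask: pretend the first layer is critical to convergence
# of illegal-domain fine-tuning, so the publisher ships it frozen.
frozen = {"0.weight", "0.bias"}
params = freeze_selected(model, frozen)

# The optimizer only ever sees the trainable tensors, which is where
# the reported GPU-memory and wall-clock savings come from.
opt = torch.optim.AdamW(params, lr=1e-4)
loss = model(torch.randn(8, 4)).pow(2).mean()
loss.backward()

# Frozen tensors receive no gradients during fine-tuning.
assert model[0].weight.grad is None
assert model[1].weight.grad is not None
```

A fine-tuner who later tries to adapt the model to a blocked domain can only update the unfrozen tensors, which is what degrades convergence there.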