Anti-Customization

  • Fawkes

    USENIX Security 2020《Fawkes: Protecting Privacy against Unauthorized Deep Learning Models》
    Applies subtle adversarial perturbations ("cloaks") to images that shift the facial features in feature space toward a decoy identity, causing models trained on the cloaked images to mislearn or fail to recognize the real person.
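    A minimal sketch of the cloaking objective, assuming a pretrained face-embedding network (the `feature_extractor` below is a placeholder) and an L∞ budget in place of Fawkes's DSSIM bound:
```python
import torch

def cloak(x, x_decoy, feature_extractor, eps=8/255, steps=100, lr=0.01):
    """Fawkes-style cloak: nudge x's features toward a decoy identity.

    `feature_extractor` is any pretrained face-embedding network (placeholder);
    Fawkes bounds the perturbation with DSSIM, simplified here to an L-inf budget.
    Images are assumed to be float tensors in [0, 1].
    """
    with torch.no_grad():
        target_feat = feature_extractor(x_decoy)         # decoy identity's embedding
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feat = feature_extractor((x + delta).clamp(0, 1))
        loss = torch.nn.functional.mse_loss(feat, target_feat)  # pull toward the decoy
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                      # keep the cloak imperceptible
    return (x + delta).clamp(0, 1).detach()
```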
  • UE

    ICLR 2021《Unlearnable Examples: Making Personal Data Unexploitable》
    Generates error-minimizing noise that drives the training error of the protected example(s) toward zero, tricking the model into believing there is "nothing" to learn from them.
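    A minimal sketch of sample-wise error-minimizing noise, assuming any image classifier `model` (a placeholder); the alternation mirrors the paper's bi-level min-min objective:
```python
import torch
import torch.nn.functional as F

def error_minimizing_noise(model, images, labels, eps=8/255, alpha=2/255,
                           outer_steps=10, inner_steps=20):
    """Unlearnable-Examples-style sketch: find per-sample noise that drives the
    training loss toward zero, so the model sees "nothing left to learn"."""
    delta = torch.zeros_like(images)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(outer_steps):
        # (1) briefly train the model on the current noisy images (outer min over theta)
        loss = F.cross_entropy(model((images + delta.detach()).clamp(0, 1)), labels)
        opt.zero_grad(); loss.backward(); opt.step()
        # (2) PGD that MINIMIZES the loss w.r.t. the noise (inner min over delta)
        for _ in range(inner_steps):
            delta = delta.detach().requires_grad_(True)
            loss = F.cross_entropy(model((images + delta).clamp(0, 1)), labels)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta - alpha * grad.sign()).clamp(-eps, eps)
    return delta.detach()
```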
  • DisDiff

    NeurIPS 2023 《DisDiff: Unsupervised Disentanglement of Diffusion Probabilistic Models》
    DisDiff strengthens adversarial attacks by analyzing intrinsic image-text relationships, particularly cross-attention, which plays a crucial role in guiding image generation.
  • Glaze

    USENIX Security 2023《Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models》
    Extracts the artwork's style features and adds imperceptible noise that shifts the model's perception of the style toward a different target style.
  • AdvDM

    ICML 2023《Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples》
    AdvDM generates imperceptible adversarial examples that break diffusion models’ ability to learn from and imitate artworks, by disrupting feature extraction across the entire reverse denoising process.
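    A minimal sketch of the adversarial objective, assuming diffusers-style Stable Diffusion components (`vae`, `unet`, `scheduler`) and a fixed text embedding `text_emb`; one timestep is Monte-Carlo sampled per ascent step, and images are assumed in [0, 1]:
```python
import torch
import torch.nn.functional as F

def advdm_perturb(x, vae, unet, scheduler, text_emb, eps=8/255, alpha=1/255, steps=60):
    """AdvDM-style sketch: gradient-ascend the latent-diffusion training loss
    w.r.t. an L-inf bounded image perturbation, sampling a random timestep each
    step. Component names follow the diffusers Stable Diffusion interface."""
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta = delta.detach().requires_grad_(True)
        latents = vae.encode(2 * (x + delta) - 1).latent_dist.sample() * 0.18215
        noise = torch.randn_like(latents)
        t = torch.randint(0, scheduler.config.num_train_timesteps,
                          (latents.shape[0],), device=latents.device)
        pred = unet(scheduler.add_noise(latents, noise, t), t,
                    encoder_hidden_states=text_emb).sample
        loss = F.mse_loss(pred, noise)                        # the LDM training loss
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)  # ascend, stay in budget
    return (x + delta).clamp(0, 1).detach()
```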
  • PhotoGuard

    ICML 2023《Raising the Cost of Malicious AI-Powered Image Editing》
    Provides two methods, an encoder attack and a full diffusion-process attack, for injecting imperceptible adversarial perturbations that disrupt the operation of the targeted diffusion models.
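    A minimal sketch of the simpler of the two, the encoder attack, which pushes the image's VAE latent toward that of an unrelated target image (the diffusers-style `vae` and the [0, 1] image range are assumptions):
```python
import torch
import torch.nn.functional as F

def encoder_attack(x, x_target, vae, eps=16/255, alpha=1/255, steps=200):
    """PhotoGuard-style encoder attack sketch: make the protected image's VAE
    latent mimic an unrelated target latent, so diffusion-based edits operate on
    the wrong content. `vae` is assumed to be a Stable Diffusion autoencoder."""
    with torch.no_grad():
        z_target = vae.encode(2 * x_target - 1).latent_dist.mean
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta = delta.detach().requires_grad_(True)
        z = vae.encode(2 * (x + delta) - 1).latent_dist.mean
        loss = F.mse_loss(z, z_target)                     # distance to the decoy latent
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta - alpha * grad.sign()).clamp(-eps, eps)  # descend toward target
    return (x + delta).clamp(0, 1).detach()
```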
  • Anti-DreamBooth

    ICCV 2023《Anti-DreamBooth: Protecting Users from Personalized Text-to-image Synthesis》
    Adds subtle adversarial noise to each of the user's images before publishing, so that any DreamBooth model personalized on them produces degraded outputs.
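    A minimal sketch of the alternating scheme in the spirit of the paper's ASPL variant: briefly fine-tune a surrogate UNet on the current perturbed images, then ascend its denoising loss w.r.t. the perturbation (diffusers-style components and images in [0, 1] are assumed):
```python
import torch
import torch.nn.functional as F

def ldm_loss(x, vae, unet, scheduler, text_emb):
    # Standard latent-diffusion denoising loss for one batch of images in [0, 1].
    z = vae.encode(2 * x - 1).latent_dist.sample() * 0.18215
    noise = torch.randn_like(z)
    t = torch.randint(0, scheduler.config.num_train_timesteps, (z.shape[0],), device=z.device)
    pred = unet(scheduler.add_noise(z, noise, t), t, encoder_hidden_states=text_emb).sample
    return F.mse_loss(pred, noise)

def anti_dreambooth(images, vae, unet, scheduler, text_emb, eps=8/255, alpha=1/255,
                    rounds=10, train_steps=3, attack_steps=6, lr=5e-7):
    """Anti-DreamBooth-style sketch: alternate (1) surrogate DreamBooth training
    on the perturbed images and (2) error-maximizing updates of the perturbation."""
    delta = torch.zeros_like(images)
    opt = torch.optim.AdamW(unet.parameters(), lr=lr)
    for _ in range(rounds):
        for _ in range(train_steps):                      # (1) surrogate fine-tuning
            loss = ldm_loss((images + delta.detach()).clamp(0, 1),
                            vae, unet, scheduler, text_emb)
            opt.zero_grad(); loss.backward(); opt.step()
        for _ in range(attack_steps):                     # (2) perturbation ascent
            delta = delta.detach().requires_grad_(True)
            loss = ldm_loss((images + delta).clamp(0, 1), vae, unet, scheduler, text_emb)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
    return (images + delta).clamp(0, 1).detach()
```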
  • GenWatermark

    CVPR 2024 《Generative Watermarking Against Unauthorized Subject-Driven Image Synthesis》
    GenWatermark simultaneously embeds and detects stealthy watermarks via a paired network framework. By fine-tuning the detector with subject-driven model outputs, it ensures the watermark endures even after model misuse—effectively distinguishing authorized from unauthorized usage without impairing legitimate applications.
  • DUAW

    CVPR 2024 《DUAW: Data-free Universal Adversarial Watermark against Stable Diffusion Customization》
    DUAW embeds imperceptible adversarial perturbations into images, which significantly degrade the output quality of fine-tuned models—even without access to the original data.
  • FT-Shield

    CVPR 《FT-Shield: A Watermark Against Unauthorized Fine-tuning in Text-to-Image Diffusion Models》
    FT-Shield embeds invisible, sample-wise watermarks into training images that are designed to be rapidly learned during the early stages of fine-tuning. This ensures that the watermark is effectively transferred into fine-tuned models even with minimal training.
  • ACE/ACE+

    CVPR 2024《Targeted Attack Improves Protection against Unauthorized Diffusion Customization》
    Reveals the vulnerability of diffusion models to targeted attacks and leverages them to strengthen protection against unauthorized diffusion customization.
  • MetaCloak

    CVPR 2024 《MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning》
    MetaCloak solves the bi-level poisoning problem with a meta-learning framework, adding a transformation-sampling process to craft transferable and robust perturbations.
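    The transformation-sampling part can be sketched as expectation over transformation (EOT): average the surrogate loss over random transforms before each perturbation update; the paper's pool of meta-learned surrogate diffusion models is abstracted into a single `loss_fn` here:
```python
import torch
import torchvision.transforms as T

def eot_update(images, delta, loss_fn, eps=8/255, alpha=1/255, n_transforms=4):
    """One MetaCloak-style perturbation update with transformation sampling.
    `loss_fn` maps a batch of images in [0, 1] to the surrogate diffusion
    training loss (placeholder for the paper's meta-learned surrogates)."""
    transform = T.Compose([
        T.RandomHorizontalFlip(),
        T.RandomResizedCrop(images.shape[-1], scale=(0.8, 1.0), antialias=True),
        T.GaussianBlur(kernel_size=3),
    ])
    delta = delta.detach().requires_grad_(True)
    # Expectation over transformation: average the loss across random views.
    loss = sum(loss_fn(transform((images + delta).clamp(0, 1)))
               for _ in range(n_transforms)) / n_transforms
    grad, = torch.autograd.grad(loss, delta)
    return (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
```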
  • EditGuard

    CVPR 2024 《EditGuard: Versatile Image Watermarking for Tamper Localization and Copyright Protection》
    EditGuard offers imperceptible watermark embedding together with precise decoding of tampered areas and copyright information.
  • SimAC

    CVPR 2024《SimAC: A Simple Anti-Customization Method for Protecting Face Privacy against Text-to-Image Synthesis of Diffusion Models》
    1) Proposes an adaptive greedy search for the optimal attack time steps.
    2) Examines the roles of features at different layers during denoising and devises a feature-based optimization framework for anti-customization.
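    A simplified sketch of point 1), scoring candidate time steps by the input-gradient magnitude they induce and keeping the top-k for the subsequent attack (diffusers-style components are assumed; the paper's exact selection criterion differs in detail):
```python
import torch
import torch.nn.functional as F

def select_timesteps(x, vae, unet, scheduler, text_emb, candidates, k=10):
    """Greedy time-step search sketch in the spirit of SimAC: prefer time steps
    whose denoising loss yields the largest gradient on the input image."""
    scores = []
    for t in candidates:                                   # candidate integer timesteps
        xr = x.clone().requires_grad_(True)
        z = vae.encode(2 * xr - 1).latent_dist.sample() * 0.18215
        noise = torch.randn_like(z)
        tt = torch.full((z.shape[0],), t, device=z.device, dtype=torch.long)
        pred = unet(scheduler.add_noise(z, noise, tt), tt,
                    encoder_hidden_states=text_emb).sample
        grad, = torch.autograd.grad(F.mse_loss(pred, noise), xr)
        scores.append((grad.abs().mean().item(), t))       # larger gradient => more useful
    return [t for _, t in sorted(scores, reverse=True)[:k]]
```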
  • PID

    ICML 2024 《PID: Prompt-Independent Data Protection Against Latent Diffusion Models》
    PID perturbs images so that the visual encoder produces consistently corrupted latent representations, making the protection independent of the prompts used during customization.
  • PAP

    NeurIPS 2024 《Prompt-Agnostic Adversarial Perturbation for Customized Diffusion Models》
    PAP first models the prompt distribution using a Laplace Approximation, and then produces prompt-agnostic perturbations by maximizing a disturbance expectation based on the modeled distribution.
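    A simplified sketch of one update: the prompt-embedding distribution is treated as a Gaussian (the paper derives it via a Laplace approximation; here its moments are simply estimated from a pool of candidate prompt embeddings), and the expected loss over samples from it is ascended:
```python
import torch

def pap_update(x, delta, prompt_embs, loss_fn, eps=8/255, alpha=1/255, n_samples=4):
    """PAP-style prompt-agnostic update sketch. `prompt_embs` is a stack of text
    embeddings for candidate prompts; `loss_fn(images, text_emb)` returns the
    diffusion training loss (both are placeholders)."""
    mu = prompt_embs.mean(dim=0, keepdim=True)             # Gaussian fit to the
    std = prompt_embs.std(dim=0, keepdim=True)             # prompt-embedding pool
    delta = delta.detach().requires_grad_(True)
    loss = 0.0
    for _ in range(n_samples):
        e = mu + std * torch.randn_like(std)               # sample a "prompt"
        loss = loss + loss_fn((x + delta).clamp(0, 1), e)
    grad, = torch.autograd.grad(loss / n_samples, delta)
    return (delta + alpha * grad.sign()).clamp(-eps, eps).detach()  # ascend expectation
```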
  • ImageShield

    Springer《ImageShield: a responsibility-to-person blind watermarking mechanism for image datasets protection》
    Combines traditional transform-domain watermarking with an enhanced GAN.
  • IDProtector

    CVPR 2025 《IDProtector: An Adversarial Noise Encoder to Protect Against ID-Preserving Image Generation》
    IDProtector applies imperceptible adversarial noise to portrait photos in a single forward pass, which offers universal protection for portraits against multiple state-of-the-art encoder-based methods, including InstantID, IP-Adapter, and PhotoMaker.
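    A minimal training sketch of the single-pass idea: a feed-forward noise encoder emits a bounded perturbation that pushes the face-ID embedding of the protected photo away from the original (both networks below are hypothetical placeholders):
```python
import torch
import torch.nn.functional as F

def train_noise_encoder(noise_encoder, id_encoder, loader, eps=8/255, epochs=1, lr=1e-4):
    """IDProtector-style sketch: learn an encoder that protects a portrait in one
    forward pass. `id_encoder` stands in for the face-ID backbone used by
    encoder-based personalization pipelines."""
    opt = torch.optim.Adam(noise_encoder.parameters(), lr=lr)
    for _ in range(epochs):
        for x in loader:                                    # portraits in [0, 1]
            delta = eps * torch.tanh(noise_encoder(x))      # single pass, L-inf bounded
            x_protected = (x + delta).clamp(0, 1)
            with torch.no_grad():
                emb_clean = id_encoder(x)
            emb_protected = id_encoder(x_protected)
            # Minimizing cosine similarity drives the protected ID embedding
            # away from the true identity.
            loss = F.cosine_similarity(emb_protected, emb_clean, dim=-1).mean()
            opt.zero_grad(); loss.backward(); opt.step()
    return noise_encoder
```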
  • CAT

    CVPR 2025《CAT: Contrastive Adversarial Training for Evaluating the Robustness of Protective Perturbations in Latent Diffusion Models》
    The primary reason adversarial examples are effective as protective perturbations in latent diffusion models is that they distort the latent representations. The authors propose Contrastive Adversarial Training (CAT), which uses lightweight adapters as an adaptive attack against these protection methods.
  • ID-Cloak

    CVPR 2025 《ID-Cloak: Crafting Identity-Specific Cloaks Against Personalized Text-to-Image Generation》
    ID-Cloak learns a universal cloak based on an identity subspace derived from multiple images of a person.
  • PersGuard

    CVPR 2025《PersGuard: Preventing Malicious Personalization via Backdoor Attacks on Pre-trained Text-to-Image Diffusion Models》
    PersGuard implants backdoor triggers into pre-trained T2I models, preventing the generation of customized outputs for designated protected images while allowing normal personalization for unprotected ones.
  • GuardDoor

    CVPR 2025 《GuardDoor: Safeguarding Against Malicious Diffusion Editing via Protective Backdoors》
    The model provider participating in the mechanism fine-tunes the image encoder to embed a protective backdoor, allowing image owners to request the attachment of imperceptible triggers to their images.
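    A minimal sketch of the provider-side fine-tuning (an interpretation, with all names hypothetical): the image encoder is tuned so triggered images collapse to a fixed, uninformative latent while clean images keep their original encoding:
```python
import copy
import torch
import torch.nn.functional as F

def guarddoor_finetune(encoder, loader, trigger, target_latent, lam=1.0, lr=1e-5, epochs=1):
    """GuardDoor-style sketch: embed a protective backdoor into the diffusion
    pipeline's image encoder. `trigger` is an additive, imperceptible pattern;
    `target_latent` is the uninformative latent that triggered images map to."""
    frozen = copy.deepcopy(encoder).eval()                  # reference for clean behavior
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(epochs):
        for x in loader:                                    # images in [0, 1]
            with torch.no_grad():
                z_ref = frozen(x)
            z_clean = encoder(x)                            # preserve clean encodings
            z_trig = encoder((x + trigger).clamp(0, 1))     # reroute triggered ones
            loss = F.mse_loss(z_clean, z_ref) \
                 + lam * F.mse_loss(z_trig, target_latent.expand_as(z_trig))
            opt.zero_grad(); loss.backward(); opt.step()
    return encoder
```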
  • OmniGuard

    CVPR 2025 《OmniGuard: Hybrid Manipulation Localization via Augmented Versatile Deep Image Watermarking》
    OmniGuard integrates proactive embedding with passive, blind extraction for robust copyright protection and tamper localization. Its hybrid forensic framework enables flexible selection of the localization watermark and introduces a degradation-aware tamper-extraction network for precise localization under challenging conditions.