Roots Beneath the Cut: Uncovering the Risk of Concept Revival in Pruning-Based Unlearning for Diffusion Models
This paper shows that pruning-based unlearning in diffusion models is inherently insecure: the locations of the pruned weights act as a side-channel signal that enables a novel, data-free, training-free attack to fully revive the erased concepts. The authors therefore call for safer pruning mechanisms that conceal these locations.
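To illustrate the side channel at a purely conceptual level (this is a toy sketch, not the paper's attack), suppose unlearning zeroes out selected weights. The zero pattern itself then reveals exactly which locations were targeted as concept-relevant, and an adversary can read that mask directly off the released model. The magnitude threshold and array shapes below are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))      # stand-in for one layer of a diffusion model

# Toy stand-in for pruning-based unlearning: zero the "concept" weights.
pruned = weights.copy()
target_mask = np.abs(weights) < 0.5    # hypothetical concept-relevant locations
pruned[target_mask] = 0.0

# Side channel: the zeroed positions are visible in the released weights,
# so the adversary recovers the targeted locations without data or training.
leaked_mask = (pruned == 0.0)
print("recovered target mask:", bool(np.array_equal(leaked_mask, target_mask)))
```

The point of the sketch is only that exact zeros are trivially detectable; a revival attack could then focus on restoring values at precisely those positions.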