Paper: Stable Diffusion “memorizes” some images, sparking privacy concerns

[Image: An image from Stable Diffusion's training set (left) compared to a similar Stable Diffusion generation (right) when prompted with "Ann Graham Lotz." (credit: Carlini et al., 2023)]

On Monday, a group of AI researchers from Google, DeepMind, UC Berkeley, Princeton, and ETH Zurich released a paper outlining an adversarial attack that can extract a small percentage of training images from latent diffusion image synthesis models such as Stable Diffusion. The results challenge the view that image synthesis models do not memorize their training data, and that training data might remain private if it is not disclosed. Recently, AI image synthesis models have [...]
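The core of the extraction attack, as the researchers describe it, is to sample a prompt many times and look for near-duplicate generations: distinct random seeds should normally produce diverse images, so a tight cluster of nearly identical outputs is evidence the model is reproducing a memorized training image. Below is a minimal sketch of that clustering idea in Python. It is illustrative only: `generate_images` is a hypothetical stand-in for a real diffusion sampler, and the distance threshold and cluster size are placeholder values, not the paper's.

```python
"""Sketch of the memorization test described in Carlini et al. (2023):
generate many images for one prompt and flag near-duplicate clusters.
`generate_images` is a hypothetical stub standing in for a real
diffusion sampler; thresholds here are illustrative, not the paper's."""
import numpy as np


def generate_images(prompt: str, n: int, seed: int = 0) -> np.ndarray:
    # Hypothetical sampler: replace with a real pipeline that returns
    # n images as float arrays of shape (n, H, W, C) in [0, 1].
    rng = np.random.default_rng(seed)
    return rng.random((n, 64, 64, 3))


def pairwise_l2(images: np.ndarray) -> np.ndarray:
    # Euclidean distance between every pair of flattened images.
    flat = images.reshape(len(images), -1)
    diffs = flat[:, None, :] - flat[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1))


def looks_memorized(prompt: str, n: int = 100,
                    threshold: float = 5.0, min_cluster: int = 10) -> bool:
    """Flag the prompt if many independent generations are near-duplicates."""
    images = generate_images(prompt, n)
    dist = pairwise_l2(images)
    # Count, for each generation, how many others fall within the
    # distance threshold; a large neighborhood approximates a clique
    # of near-identical outputs.
    neighbors = (dist < threshold).sum(axis=1) - 1  # exclude self
    return bool((neighbors >= min_cluster - 1).any())


if __name__ == "__main__":
    print(looks_memorized("Ann Graham Lotz"))
```

With a real sampler plugged in, prompts that trip this check would then be compared against the training set to confirm an actual match, which is how the example above was identified.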