Generative ML and CSAM: Implications and Mitigations
A collaboration between the Stanford Internet Observatory and Thorn examines the risks of child sexual abuse material produced with machine learning image generators.
A joint report from the Stanford Internet Observatory and Thorn examines the implications of fully realistic child sexual abuse material (CSAM) produced by generative machine learning models. Advances in the open-source generative ML community have led to increasingly realistic adult content, to the point that content indistinguishable from actual photographs is likely to be common in the very near future. These same models and techniques have also been leveraged to produce CSAM. We examine what has enabled this state of affairs, the potential societal consequences of the proliferation of such content, and measures that can be taken to minimize harm from current and future visual generative ML models.