New report finds generative machine learning exacerbates online sexual exploitation
The Stanford Internet Observatory and Thorn find that rapid advances in generative machine learning make it possible to create realistic imagery that facilitates child sexual exploitation.
Generative machine learning (ML) models have been rapidly adopted by large numbers of users. Large language model (LLM) chatbots and generative media tools can produce human-like responses or render almost any imagined scene in increasingly short timeframes. In just the past few months, some of this text and imagery has become so realistic that it is difficult to distinguish from reality. The public release of these tools has also spawned a thriving open-source community dedicated to expanding their capabilities.
Tools for generating realistic images and video are advancing rapidly in the open-source ML community, driven by a combination of improvements in generative ML techniques and increasingly powerful hardware available to consumers. The same technology, however, has been misused to create deceptive accounts and content for government propaganda, and to produce non-consensual explicit media used for harassment and extortion.
A new report from researchers at the Stanford Internet Observatory and Thorn, a nonprofit working to address the role of technology in facilitating child sexual exploitation, highlights the threat this rapidly advancing technology poses to child safety, namely the production of increasingly realistic computer-generated child sexual abuse material (CSAM). The report outlines a number of potential technical mitigations and areas for industry and policy collaboration on AI ethics and safety measures. However, the researchers warn that the use of generative ML tools to create realistic non-consensual adult content and CSAM is growing and likely to worsen without intervention by a broad array of stakeholders.