Description: In 2017, Noelle Martin discovered explicit deepfake videos online that used AI technology to superimpose her face onto pornographic scenes. This incident was a continuation of the abuse she had experienced since at least 2012, when she first found doctored still images of herself in similar contexts. Despite the initial lack of legal protections, her advocacy efforts were instrumental in making image-based abuse a criminal offense in Australia.
Editor Notes: Incidents 771 and 772 are closely related in terms of narrative overlap and discussion.
Entities
Alleged: An AI system developed by Stanford University, Max Planck Institute, University of Erlangen-Nuremberg, Face2Face, FaceApp, and Zao and deployed by unknown deepfake creators harmed Noelle Martin.
Incident Stats
ID
771
Report Count
1
Incident Date
2020-02-06
Editors
Incident Reports
Report Timeline
elle.com · 2020
- View the original report at its source
- View the report on the Internet Archive
'There's deepfakes of you,' the email read. Instantly, my pulse quickened. Who was this? How did they get my email address? What was a deepfake?
As panic began to set in, I Googled the term and watched, horrified, as clips of …
Variants
Una "Variante" es un incidente que comparte los mismos factores causales, produce daños similares e involucra los mismos sistemas inteligentes que un incidente de IA conocido. En lugar de indexar las variantes como incidentes completamente separados, enumeramos las variaciones de los incidentes bajo el primer incidente similar enviado a la base de datos. A diferencia de otros tipos de envío a la base de datos de incidentes, no se requiere que las variantes tengan informes como evidencia externa a la base de datos de incidentes. Obtenga más información del trabajo de investigación.