
Privacy regulators from around the world have issued a joint warning about the rise of AI-generated deepfakes, arguing that the spread of non-consensual images poses a global risk rather than a problem confined to individual countries.
Sixty-one authorities endorsed a declaration that draws attention to AI images and videos depicting real people without their knowledge or consent.
The signatories highlight the rapid growth of intimate deepfakes, particularly those targeting children and individuals from vulnerable communities. They note that such material often circulates widely on social platforms and may fuel exploitation or cyberbullying.
The declaration argues that the scale of the threat requires coordinated action rather than isolated national responses.
European authorities, including the European Data Protection Board and the European Data Protection Supervisor, support the effort to build global cooperation.
Regulators say that only joint oversight can limit the harms caused by AI systems that generate false depictions and fail to protect individuals' privacy as required under frameworks such as the General Data Protection Regulation.
