European Commission Proposes Guidelines for Tech Platforms to Detect AI-Generated Content Ahead of Elections
Modified on: Mon, 12 Feb, 2024 at 12:32 AM
To fortify the integrity of the upcoming European elections scheduled for June, the European Commission has opened a public consultation on election security guidelines targeting very large online platforms (VLOPs) such as TikTok, X, and Facebook. The guidelines focus on the detection and mitigation of artificial intelligence (AI)-generated content, particularly deepfakes, that could threaten democratic processes.
The draft guidelines acknowledge the risks associated with generative AI and deepfakes, emphasizing their potential to mislead voters and manipulate electoral processes. One of the proposed measures includes alerting users on relevant platforms to potential inaccuracies in content produced by generative AI. The guidelines also advocate guiding users to authoritative information sources to foster informed decision-making.
Generative AI has the capacity to create inauthentic and misleading synthetic content related to political actors, events, election polls, contexts, or narratives. Recognizing the severity of these risks, the draft proposes that tech platforms implement safeguards to prevent the generation of misleading content that could significantly impact user behavior.
The public consultation, open until March 7 in the European Union, seeks input on these guidelines, signaling a collaborative approach to addressing the challenges posed by AI-generated content in the context of elections. The proposed "best practices" for risk mitigation draw inspiration from the recently approved legislative proposal, the AI Act, and its non-binding counterpart, the AI Pact.
Specifically addressing AI-generated text, the guidelines recommend that platforms indicate, where possible, the concrete sources of information used as input data for generated outputs. This transparency is intended to let users verify the reliability of the information and place it in its proper context.
While the European Commission has not provided a specific timeline for the implementation of these guidelines under the EU's content moderation law, the Digital Services Act, major tech companies are already taking proactive steps. Meta, in a recent blog post, announced plans to introduce guidelines concerning AI-generated content on platforms such as Facebook, Instagram, and Threads. Content recognized as AI-generated will receive visible labels, contributing to increased transparency and user awareness.
The proposed guidelines reflect a comprehensive and forward-thinking approach to addressing the evolving landscape of AI-generated content, ensuring that technology platforms actively contribute to the preservation of election integrity in the digital age.
(AMAKA NWAOKOCHA, COINTELEGRAPH, 2024)