
Concerns Raised Over Big Tech's Influence on EU's AI Code of Practice

Published at 11:12 AM

News Overview

🔗 Original article link: Big Tech watered down AI code of practice - report

In-Depth Analysis

The Corporate Europe Observatory (CEO) report analyzes the evolution of the EU’s Code of Practice on Disinformation as it relates to AI, highlighting specific instances where big tech’s influence allegedly diminished its effectiveness.

Commentary

The allegations raised in the CEO report are serious and warrant close scrutiny. If proven accurate, they suggest that big tech companies are prioritizing their own interests over the public good by undermining efforts to combat AI-generated disinformation. This could have significant implications for the integrity of democratic processes and the public’s ability to access reliable information.

The EU should consider strengthening its regulatory approach to AI-generated disinformation, moving beyond self-regulation to binding rules with robust enforcement mechanisms. Greater transparency in the drafting of such codes is also essential to ensure they are not unduly shaped by industry interests. The EU’s Digital Services Act (DSA), which includes provisions on platform accountability, will be crucial here: it should be used to hold platforms accountable for failing to address harmful disinformation, including disinformation generated by AI.

Reliance on voluntary codes risks creating a “race to the bottom,” where companies prioritize the appearance of compliance over genuine efforts to mitigate harm. A more proactive, regulatory approach is needed to ensure that AI is developed and used responsibly.

