EU calls for labeling AI content

European Commission deputy head Vera Jourova has said that companies deploying generative AI tools like ChatGPT should label AI-generated content to help curb the rising spread of fake news created with AI tools.

“Signatories who integrate generative AI into their services like Bingchat for Microsoft, Bard for Google should build in necessary safeguards that these services cannot be used by malicious actors to generate disinformation,” said Jourova at a press conference.

Calls for labeling and regulating generative AI have grown across the developed world amid a string of viral AI-generated fakes. For instance, last month fake images of a blast near the Pentagon went viral.

What amplified the hoax was that the images were also shared by a verified Twitter account named “BloombergFeed.” US stocks briefly fell after the fake images appeared.

Multiple such instances of AI-driven fake news have cropped up this year. Last month, for example, police in China’s Gansu province arrested a person surnamed Hong, alleging that he used ChatGPT to fabricate news of a train crash.

Companies like Google, Meta Platforms, and Microsoft are signatories to the EU Code of Practice, which calls for safeguards to tackle disinformation.

At her press conference, Jourova said, “Signatories who have services with a potential to disseminate AI generated disinformation should in turn put in place technology to recognise such content and clearly label this to users.”

EU Calls Upon Companies to Label AI-Generated Content

Notably, Jourova also warned Twitter, which quit the Code last week: “By leaving the Code, Twitter has attracted a lot of attention and its actions and compliance with EU law will be scrutinised vigorously and urgently.”

Twitter has been at loggerheads with regulators after laying off several employees on its content moderation teams. Last week, Ella Irwin, Twitter’s head of trust and safety, also resigned.

Meanwhile, the EU’s crackdown on AI-generated content comes amid fears of Russian-sponsored misinformation targeted at Western users.

The all-important US elections are also due next year, and many fear AI could be used to target voters. Little wonder that regulators in Europe as well as the US are warming up to AI regulation.

EU is at the Forefront of Drafting AI Regulations

AI stocks have been on fire this year: the Global X Robotics and Artificial Intelligence ETF has gained 37%, outperforming the Nasdaq, while one of the rare listed pure-play AI stocks is up a whopping 191% YTD.

While investors are celebrating the massive rally in AI stocks, which has helped lift the entire tech universe and added over $4 trillion to the market cap of Nasdaq 100 stocks, regulators are worried about the risks.

Europe is, incidentally, at the forefront of drafting AI regulations. Last month, European Union lawmakers voted to incorporate tougher amendments into the region’s widely anticipated AI Act, which could become the first comprehensive AI regulation globally.

The UK’s Competition and Markets Authority (CMA) has also launched an initial review of AI models.

In the US, lawmakers grilled OpenAI CEO Sam Altman on AI risks last month, which was preceded by a meeting between Vice President Kamala Harris and the CEOs of Anthropic, Microsoft, Alphabet, and OpenAI.

All said, with generative AI making waves, regulators are looking at ways to mitigate the associated risks, especially the spread of fake news.

As for tech companies, labeling all AI-generated content might not be easy, even as companies ranging from Microsoft to Twitter push for AI regulation.