Meta Platforms has announced a new policy prohibiting the use of AI-generated content in political advertising across Facebook and Instagram beginning in 2024. The move aims to counter the misinformation and manipulation threats posed by rapidly advancing generative AI capabilities.

Under the new rules, political ads containing synthesized media, such as fake videos or audio generated by AI tools, will be banned. Advertisers will also have to disclose whether an ad includes photorealistic AI-generated content that could mislead viewers.

The policy shift comes amid growing concerns over “deepfakes” and other AI-powered techniques that make it easier than ever to spread false or misleading information intended to deceive and influence the public.

Combating Potential Misuse of AI Ahead of the 2024 U.S. Presidential Election

AI-generated media presents new ways to potentially distort reality and deceive people at scale. There are fears that generative AI could fuel unprecedented levels of misinformation during the 2024 U.S. presidential race and other upcoming major elections worldwide.

By banning synthetic AI content in political ads, Meta aims to preemptively mitigate these risks on its platforms. The company has faced harsh criticism in the past for enabling election interference and the spread of misinformation and disinformation on its apps.

Meta also recently prohibited advertisers across all industries from using its own new generative AI ad-creation tools to promote content related to politics, social issues, housing, employment, credit, health, pharmaceuticals, or financial services.

“As we continue to test new Generative AI ads creation tools in Ads Manager, advertisers running campaigns that qualify as ads for Housing, Employment or Credit or Social Issues, Elections, or Politics, or related to Health, Pharmaceuticals, or Financial Services aren’t currently permitted to use these Generative AI features,” the company stated.

Closer Monitoring of Political Ads Is Coming to Meta’s Platforms as Well

In addition to the AI ban, Meta says that new labeling practices will provide greater transparency around political ads leading up to 2024.

Advertisers will have to confirm during the ad submission process whether the content was “digitally created or altered” to deceptively depict events, to make it appear that a real person did something they did not, or to mislead people about the origin of audio, image, or video content.

Ads that sponsors confirm contain manipulated media must carry a “Digitally created or altered” label visible to all users. Meta’s ad transparency tools will also record whether an ad used AI-generated imagery or audio.

These measures aim to give users additional context and indicators around political messaging, allowing them to quickly identify ‘fake news’ and misinformation. Meta Platforms (META) stated that the enhanced disclosures will also enable voters to scrutinize ads carefully and make more informed decisions.

Criticism and Calls for Broader Reform from Legislators Continue

The policy changes received mixed reactions from regulators and advocacy groups. Many praised the move as a positive step but emphasized that voluntary measures alone are insufficient without legal mandates.

Senator Amy Klobuchar called the move “a step in the right direction” but cautioned that legislators cannot rely solely on tech companies to self-regulate in this domain. She vowed to keep pushing for laws that mandate AI disclosures and expressly prohibit deceptive use.

Civil society organizations like the Center for Countering Digital Hate also asserted that platforms need to be legally obliged to address the risks posed by AI disinformation tools.

There are currently several legislative proposals in Congress focused on regulating political deepfakes and manipulative media. Many hope that Meta’s voluntary ban will put pressure on lawmakers to enact more comprehensive reforms ahead of the 2024 elections.

However, effectively monitoring and enforcing limits on AI-generated content remains an immense technical challenge for platforms. Meta itself is actively developing advanced generative AI tools, as evidenced by its Galactica model debut.

Critics argue that Meta’s new policy lacks sufficient detail on enforcement and still leaves room for abuse by bad actors. The company maintains that the ban represents the most responsible approach at this time, as generative AI capabilities continue to evolve rapidly.

As novel risks emerge around AI disinformation, the public can expect an increasingly intense debate over preventive policies that balance security and free speech. While Meta’s ban is a positive first step, lasting solutions to the dilemmas posed by generative content will likely require coordinated responses across tech, media, and government.