Google, a subsidiary of Alphabet Inc., has said it will restrict the election-related questions that its chatbot Bard and its artificial intelligence-based (AI-based) search features can answer. The restrictions, announced ahead of the 2024 U.S. presidential election, will take effect by early 2024. Google’s focus is on using AI responsibly to help voters and those running election campaigns.

Google’s Election Support in 2024

In 2024, Google plans to provide extra support for elections around the world, including the U.S. presidential election, as part of its ongoing work to protect election integrity. The company is paying special attention to how AI can be useful while remaining aware of the challenges it brings.

Here’s what Google is doing ahead of the election:

  • Safeguarding Platforms: Google has spent years making its platforms safer during elections, using AI to find and remove harmful content. Its newer AI models are expected to respond to emerging threats more quickly.
  • Handling AI Products Responsibly: Products like Bard and Search Generative Experience (SGE) are being tested for safety, and Google will limit the types of election-related questions these tools can answer starting in early 2024.
  • Helping People Spot AI-Generated Content: Google is introducing:
    • Ads Disclosures: Election advertisers must disclose AI use in their ads.
    • Content Labels: YouTube will require labels on AI-created or altered content.
    • Digital Watermarking: A new tool, SynthID, embeds a digital watermark directly into AI-generated images and audio.
  • Providing Reliable Information to Voters: Google is working with partners to make sure voters get accurate information about voting, such as where to vote and election results. It is doing this through:
    • Search: Partnering with groups like Democracy Works and The Associated Press to give reliable voting information.
    • News and YouTube: Launching features to connect people with authoritative election news.
    • Maps: Showing where polling stations are and protecting them from spam.
  • Ads: Google requires advertisers who run election ads to verify their identity and clearly disclose who paid for the ad.
  • Boosting Campaign Security: Google’s Advanced Protection Program helps people involved in elections stay safe online, and Google is working with Defending Digital Campaigns (DDC) to give campaigns the security tools they need.
  • Fighting Cyber Threats: Google’s security teams continuously monitor for and respond to online threats targeting elections.

Meta’s Stance Against AI Misinformation in Politics

Meta Platforms, the company behind Facebook and Instagram, has also announced new restrictions on AI-generated content in political ads on its platforms, taking effect in 2024. The policy is intended to address the growing problem of misinformation and manipulation enabled by advanced AI technology.

Key aspects of Meta’s new policy include:

  • Stopping Synthesized Media: Political advertisements that use AI-generated fake video or audio will not be allowed.
  • Disclosure of AI Content: Advertisers must disclose whether their ads include AI-generated content that looks realistic but could mislead viewers.

The move comes amid growing concern about “deepfakes” and other AI techniques that make it far easier to spread false or misleading information capable of deceiving the public.

The Opportunities and Challenges of AI in Political Campaigns

As AI’s role in elections becomes more prominent, concerns about its potential misuse have escalated. Kevin Pérez-Allen of United States of Care highlights AI’s capabilities in understanding voting patterns and crafting campaign messages, but he also notes that AI cannot replace direct interactions with voters, such as door-to-door canvassing.

The potential benefits of AI in elections include:

  • Personalized Campaigning: AI could help tailor campaign messages so they are more specific and relevant to different groups of voters, rather than lumping voters into very broad categories.
  • Better Access to Information: AI chatbots could explain a candidate’s plans in detail, drawing on data and potentially doing so in multiple languages and dialects.

However, there are also significant risks. Sinclair Schuller of Nuvalence points to the danger of deepfakes, AI-generated fake videos or images that could misrepresent political candidates. One such example appeared in Chicago’s mayoral race in 2023.