As our society moves away from traditional news sources, our dependence on social media for reliable information grows. That dependence becomes dangerous when technology can manufacture convincing false information at scale. Artificial intelligence can now write social media posts and produce realistic audio and video, and these so-called “deepfakes” are becoming a major concern in the fight against disinformation.
Consider, for example, that political candidates can be made to give speeches they never delivered. Jordan Peele proved this point in April 2018 with a simultaneously hilarious and terrifying video of former President Barack Obama delivering a fabricated public service announcement. More recently, Samsung’s research center in Moscow developed a new AI that’s capable of creating photorealistic fake videos from a single photo.
Deepfakes trace back to late 2017, when an anonymous Reddit user aptly named “deepfakes” shared a machine learning model that could superimpose faces onto existing videos. All it took to use the software was enough photos of the chosen subject’s face, and with Samsung’s latest advancement, even that modest requirement is shrinking toward a single image.
AI tools have immense potential to make our lives easier, but amid today’s proliferation of fake news and propaganda, natural language processing and deep learning can just as easily become our worst nightmare.
It wouldn’t be wise to underestimate the potential consequences of deepfakes. A 2018 report from eMarketer found that more than 64 million Millennials stream or download video to their devices at least once a month. That fact, coupled with the growing use of social media, creates the perfect environment for fake videos about hot-button issues to spread rapidly and potentially spark anything from a mass riot to a terrorist attack, or even an international conflict.
Interestingly, the same technology used to create deepfakes is also being used to fight them. Human eyes may miss the evidence of a faked video, but AI algorithms can analyze its underlying data for the subtle artifacts manipulation leaves behind. These detectors can blunt the damaging effects of deepfakes, but as currently deployed, the technology isn’t quite keeping up.
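To make the idea concrete, here is a minimal sketch of frame-level screening. It assumes a hypothetical trained classifier standing in for the real detectors described above; the threshold, helper names, and the stand-in scoring function are illustrative, not any vendor’s actual system.

```python
# A minimal sketch of frame-level deepfake screening. `score_frame` is a
# stand-in for a real trained detector; the threshold is an assumption.
import cv2          # pip install opencv-python
import numpy as np

FAKE_THRESHOLD = 0.8  # assumed probability above which a frame is flagged

def score_frame(frame: np.ndarray) -> float:
    """Stand-in for a learned detector's inference call. A production system
    would run a trained model here and return its manipulation probability."""
    return 0.0  # placeholder score

def screen_video(path: str) -> float:
    """Return the fraction of frames flagged as likely manipulated."""
    capture = cv2.VideoCapture(path)
    flagged = total = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        total += 1
        if score_frame(frame) >= FAKE_THRESHOLD:
            flagged += 1
    capture.release()
    return flagged / max(total, 1)
```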
For instance, Google has made significant strides but is still far from outrunning deepfakes. Through the Google News Initiative, the company has released a dataset of synthetic speech so that researchers can train AI to distinguish real voices from computer-generated ones. Because this effort only addresses audio, however, a large portion of fake media goes undetected.
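As a rough illustration of how such a corpus could be used, the sketch below trains a simple real-versus-synthetic speech classifier on labeled clips. The file names, feature choice, and model are assumptions for illustration, not Google’s actual pipeline.

```python
# Minimal sketch: train a real-vs-synthetic speech classifier from labeled
# audio clips. File names below are placeholders, not a real dataset layout.
import numpy as np
import librosa                               # pip install librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: str) -> np.ndarray:
    """Summarize a clip as its mean log-mel spectrum, a simple fixed-size feature."""
    audio, rate = librosa.load(path, sr=16000)
    mel = librosa.feature.melspectrogram(y=audio, sr=rate, n_mels=64)
    return np.log(mel + 1e-6).mean(axis=1)

# label 0 = real human speech, label 1 = computer-generated speech
training_clips = [("real_0001.wav", 0), ("synthetic_0001.wav", 1)]  # placeholders

X = np.stack([clip_features(path) for path, _ in training_clips])
y = np.array([label for _, label in training_clips])

detector = LogisticRegression(max_iter=1000).fit(X, y)
print(detector.predict_proba(X)[:, 1])   # probability each clip is synthetic
```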
Facebook has combined humans and AI to combat the problem, employing 27 fact-checking partners in 17 countries to uncover phony videos and posts. Staff members from the Associated Press, Snopes, and FactCheck.org, among others, validate content flagged by a machine learning model. Unfortunately, this step occurs only after the videos are posted, so if fake content isn’t caught quickly enough, it has ample opportunity to go viral.
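The two-stage pattern described here (an automated filter that flags suspicious posts, followed by human review) can be sketched as follows. The threshold, data fields, and scoring hook are assumptions, not Facebook’s actual system.

```python
# Sketch of a filter-then-review pipeline: an ML model scores posts, and only
# the suspicious ones are queued for human fact-checkers.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    post_id: str
    media_url: str
    suspicion_score: float = 0.0

REVIEW_THRESHOLD = 0.7   # assumed cutoff for routing a post to human review

def triage(posts: List[Post], score_post: Callable[[Post], float]) -> List[Post]:
    """Score each post with the automated filter and return the review queue."""
    review_queue = []
    for post in posts:
        post.suspicion_score = score_post(post)
        if post.suspicion_score >= REVIEW_THRESHOLD:
            review_queue.append(post)
    # Posts below the threshold stay live; queued ones await a fact-checker's verdict.
    return sorted(review_queue, key=lambda p: p.suspicion_score, reverse=True)
```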
The AI behind deepfakes has already reached the point where deceptive videos can be created and shared in real time. To keep pace, the AI used to filter deepfakes must run continuously so that fake content is detected and stopped as quickly as possible. And rather than simply removing identified fakes, media outlets and the teams behind deepfake detectors should be prepared to quickly publish their own articles and videos that set the record straight.
Rising consumption of online media without a matching rise in skepticism toward fakery means these hypothetical scenarios edge closer to reality. For now, we all must be more vigilant about what our news contains and where it comes from. Aside from more sophisticated AI, public awareness is the most effective strategy for containing the potential damage of deepfakes.
Given these possibilities, the most interesting race to watch in 2020 may not be between political candidates — but between real people and their AI doppelgängers.