From deepfake to "cheap fake," it's getting harder to tell what's true on your favorite apps and websites
In early 2018 a video that appeared to feature former President Obama discussing the dangers of fake news went viral. The clip, created by comedian Jordan Peele, foreshadowed challenges that have now become all too real. These days, tech firms, media companies and consumers are all routinely forced to make determinations about whether content is authentic or fake — and it's increasingly hard to tell the difference.
Deepfakes are videos and images that have been digitally manipulated to depict people saying and doing things that never happened. Most deepfakes use artificial intelligence to alter video and to generate authentic-sounding audio. These clips are often produced to fool viewers, and are optimized to spread rapidly on social media.
Examples of deepfake content are popping up more frequently. In May, AI startup Dessa created a video that mimicked the voice of YouTube star Joe Rogan. A few weeks later, a video that purported to show Nancy Pelosi slurring her speech went viral on social media. And this week, a video featuring Facebook CEO Mark Zuckerberg speaking like a James Bond villain racked up millions of views.
Generating a convincing, high-definition deepfake video is an expensive and technical process requiring custom AI code that runs on dedicated processing hardware. But the cost of consumer-grade graphics processing units (GPUs) has dropped rapidly in recent years, leading to the rise of "cheap fake" videos, the term cybersecurity experts use to describe low-quality videos like the Nancy Pelosi hoax clip.
"'Cheap fakes' are the new fake news," said a technology executive familiar with deepfakes, who asked not to be named because he works closely with social media firms. "They're apparently everywhere because AI technology became widely available to both businesses and consumers, and because the hardware is pretty cheap these days."
Deepfake creators are a diverse group. The technology originated with academics at Berkeley and computer vision specialists in Silicon Valley in the 1990s. The first big breakthrough was the Video Rewrite paper, an academic work that proposed techniques for manipulating existing video footage and syncing it with new audio. At the time, computerized video processing capabilities were primitive, and machine learning AI systems were mostly theoretical.
Today both the hardware and the software are far more sophisticated. Powerful GPUs intended to process video game content are available to consumers for a few hundred dollars, and open-source AI software has proliferated rapidly. The software used to create deepfake videos is available for free on GitHub, a code repository owned by Microsoft and used by millions of developers, and tutorials are easy to find on YouTube.
Cybersecurity experts are concerned that because the technology is inexpensive and easy to use, deepfakes will be deployed by a wide range of bad actors, from nation-states to political agitators to individual hacktivists.
"Today [anyone] can make a lot of deepfake content on a run-of-the-mill MacBook," said the tech executive. "They'll probably be used to make funny memes, but they'll most certainly be used by politicians and activists to smear the opposition."