New fake video tool can make Barack Obama say anything
We live in scary times. It’s often hard to distinguish reality from fantasy, and it doesn’t help that our very own president, Mr. Donald J. Trump, obsessively rants about “fake news media” anytime he’s criticized.
Sadly, fake news isn’t going anywhere anytime soon, and it’s probably going to get much worse in the near future thanks to new video editing tools being made by scientific researchers.
Researchers at the University of Washington recently announced a new video-editing tool that they used to superimpose audio — with realistic lip movements — onto a video of former U.S. president Barack Obama, making it appear as though he’s saying whatever they want him to.
The result is frankly terrifying. Scientists — or anyone, really — can literally put words in Obama’s mouth by converting audio sounds into mouth movements, then blending the movements into old video footage. The video looks incredibly realistic, and, to an untrained eye, would appear to be real.
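At its core, the technique learns a mapping from audio features to mouth shapes, then applies that mapping to new audio. The toy sketch below illustrates only that idea: the "audio features," "mouth landmarks," and the plain least-squares fit are all illustrative stand-ins — the actual University of Washington system uses a recurrent neural network trained on many hours of real footage, plus a separate compositing step to blend the synthesized mouth into existing video.

```python
import numpy as np

# Toy illustration of the audio-to-mouth-shape idea: learn a mapping from
# per-frame audio features to mouth-landmark positions, then apply it to
# new audio. Everything here is synthetic stand-in data; a real system
# learns this mapping with a neural network from hours of video.

rng = np.random.default_rng(0)

# Pretend "audio features" (e.g. spectral coefficients for each video frame).
n_frames, n_audio_feats, n_landmarks = 200, 13, 18
audio = rng.normal(size=(n_frames, n_audio_feats))

# Pretend ground-truth mouth landmarks, generated by a hidden linear rule
# plus noise (in reality these come from tracking a speaker's face on video).
true_map = rng.normal(size=(n_audio_feats, n_landmarks))
mouth = audio @ true_map + 0.01 * rng.normal(size=(n_frames, n_landmarks))

# "Training": recover the audio-to-mouth mapping by least squares.
learned_map, *_ = np.linalg.lstsq(audio, mouth, rcond=None)

# "Synthesis": given new audio, predict a mouth shape for each frame;
# a full system would then blend these shapes into old video footage.
new_audio = rng.normal(size=(5, n_audio_feats))
predicted_mouth = new_audio @ learned_map
print(predicted_mouth.shape)  # one predicted mouth shape per new audio frame
```

The unsettling part is that once the mapping is learned, it can be driven by any audio at all — the video never needs to contain the words being spoken.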
The scientists behind the project think that’s great. They hope the research will eventually be used in Hollywood special effects or to improve the quality of video calls. But there’s also a much scarier side to what this research means.
Soon, the same artificial intelligence system could be used to make fake videos about other celebrities or even regular people like you and me.
The research team hypothesizes that the computer system could theoretically learn how to make fake videos of basically anyone saying anything.
“Perhaps a single universal network could be trained from videos of many different people, then conditioned on individual speakers e.g. by giving it a small video sample of the new person, to produce accurate mouth shapes for that person,” the report says.
And this is just one of many recent breakthroughs in human-mimicking computer programs. Google’s DeepMind can already fake voice recordings with its speech-synthesis research, and the Telegraph reports that a newer program called Lyrebird can recreate human speech from just 60 seconds of sample audio. Yikes.
The researchers at the University of Washington, however, argue that their system could also work in reverse, as a verification tool that helps people determine whether a video is real. But the reality of human behavior is what makes this dangerous.
We’re quick to share, and few people would take the time to feed a scandalous video through a verification tool before posting it to the world. Many people don’t even read stories beyond their headlines, let alone do the due diligence to figure out whether a story is true or fake.
These particular tools aren’t publicly available just yet, but there will soon come a time when you’ll have to ask yourself whether a video is real news or fake news. For now, at least, we can bask in awe of just how quickly this research is moving forward.