'A definite threat': The fake video phenomenon taking over the internet

You might not be aware of it, but there’s a quiet arms race going on over our collective reality.

The fight is between those who want to subvert it, ushering in a world where we no longer believe what we see on our screens, and those who want to preserve the status quo.

Until now, we have largely trusted our eyes and ears when consuming audio and visual media, but new technological systems that create so-called deepfakes are changing that.

And as these deepfake videos nudge into the mainstream, experts are increasingly worried about the ramifications they will have for the information sharing that underpins society.

Dr Richard Nock is the head of machine learning at CSIRO’s Data61 and understands the daunting potential of the technology that powers deepfake videos.

The fake clips can be generated with a variety of deep learning techniques, but the most common is a system known as a Generative Adversarial Network (GAN), invented in 2014. A GAN pits two neural networks against each other – one generating fake images from reams of real data, the other trying to spot them – until the fakes become life-like. You can think of it as a kind of data-crunching robotic digital artist – and they’re getting seriously good.
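To make the adversarial idea concrete, here is a minimal sketch of a GAN training loop in PyTorch. Everything in it – the network sizes, the random stand-in “dataset”, the hyperparameters – is an illustrative placeholder, not a real deepfake system, which would be far larger and trained on enormous face datasets.

```python
# Minimal GAN sketch (illustrative only; placeholder data and sizes).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # e.g. a flattened 28x28 image

# Generator: turns random noise into a fake "image".
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, image_dim), nn.Tanh())

# Discriminator: tries to tell real images from generated ones.
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_images = torch.rand(32, image_dim) * 2 - 1  # stand-in for real data

for step in range(1000):
    # Train the discriminator: real data is labelled 1, fakes 0.
    fakes = G(torch.randn(32, latent_dim)).detach()
    d_loss = (bce(D(real_images), torch.ones(32, 1)) +
              bce(D(fakes), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: try to make the discriminator output 1.
    g_loss = bce(D(G(torch.randn(32, latent_dim))), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The key point is the feedback loop: every improvement in the detector forces the generator to produce more convincing fakes – the same dynamic that plays out at internet scale in the arms race described below.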

“If you consider deepfakes alone, it’s probably useful to keep in mind these technologies are going to get better and better,” Dr Nock told Yahoo News Australia.

“The limit of these technologies is currently more a limit of the imagination of researchers than a limit of the machine.”

‘Social implications are potentially huge’

Of course, it’s been possible to manipulate video footage for decades, but doing so took time, highly skilled artists and a fair bit of money. Now artificial intelligence is putting that ability into almost anybody’s hands.

In just a few years, the quality and accuracy of machine systems that create fake videos have improved dramatically, and the pace shows no sign of slowing.

There has been a handful of examples that have captured the public’s imagination, such as the Barack Obama video where comedian Jordan Peele controlled his face, or the more recent clip where comedian Bill Hader seamlessly morphs into Tom Cruise and other actors during impersonations.

“I would expect that the quality of deepfakes are going to ramp up even more,” Dr Nock said. “The social implication can be potentially huge.”

Researchers, governments and media organisations are increasingly worried about the impact a wave of fraudulent video media will have on society, and the endless potential for malicious disruption. While Dr Nock hasn’t worked directly on deepfake technology, he has indirectly contributed to the fight against its potential use for nefarious ends.

Former ANU student Giorgio Patrini studied machine learning at the Canberra university under Dr Nock. After graduating at the end of 2016 and completing further postdoctoral research into machine learning, he launched a start-up company in 2018 called Deeptrace Labs, which aims to prevent the weaponisation of fake videos.

A couple of years ago, as the technology was emerging, he couldn’t stop thinking about one question: “Is anybody actually asking the question about how and when these tools will be misused?”

“At least not that many people out in the open were asking these questions,” Mr Patrini told Yahoo News Australia.

Inside the deepfake arms race

According to Mr Patrini, the “commodification” of deepfake tools – many of which are freely available online in open-source forums – began really taking hold last year.

Now there are companies like his that are dedicated to using the same kind of machine learning to spot and debunk the fake videos. And the competition is stiff, with the US Pentagon, via its notorious Defense Advanced Research Projects Agency (DARPA), among those working to stay one step ahead of the technology.

“It’s effectively an arms race between the quality of the two - creation and detection,” Mr Patrini explained.

Tracking software analyses a real video of President Obama on the left, and a 'lip-sync' deepfake on the right. Source: UC Berkeley/Stephen McNally

While certain other start-up companies seek to embed videos with an encrypted key or digital signature as a way to prove their authenticity, Deeptrace Labs uses its own proprietary machine learning system to spot visual anomalies and irregularities in footage that aren’t perceptible to the human eye, revealing when it has been altered.
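For a sense of how the signature-based approach works, here is a minimal sketch using Python’s cryptography library and Ed25519 keys. The file name and workflow are hypothetical, and real provenance schemes involve key distribution and metadata standards well beyond this.

```python
# Sketch: a publisher signs a video's hash so viewers can detect tampering.
# Hypothetical file names and workflow; not any specific company's scheme.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

def file_digest(path: str) -> bytes:
    """Hash the raw bytes of the video file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

def sign_video(path: str, key: Ed25519PrivateKey) -> bytes:
    """The publisher signs the hash at release time."""
    return key.sign(file_digest(path))

def verify_video(path: str, signature: bytes, pub: Ed25519PublicKey) -> bool:
    """Anyone with the public key can check the file is unaltered."""
    try:
        pub.verify(signature, file_digest(path))
        return True
    except InvalidSignature:
        return False

# Hypothetical usage:
# key = Ed25519PrivateKey.generate()
# sig = sign_video("clip.mp4", key)
# verify_video("clip.mp4", sig, key.public_key())  # True if untouched
```

Even a one-bit change to the file flips the hash and invalidates the signature, which is why this approach can prove integrity – though unlike Deeptrace’s detection route, it only works for footage that was signed at the source.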

Most of all, Mr Patrini and his team are concerned about the implications this type of technology will have when it can be deployed in real time.

“This is a threat that we believe is coming,” he said. “This is a problem ... We as a society need to trust what’s on the other side of a digital channel.”

But the emergence of the video technology is already causing problems in political systems, which can be co-opted by fake news – or even just the threat of it.

“It is being weaponised for political means, we don’t even need to wait for that,” Mr Patrini said. “Just the knowledge that this technology is available was enough to pollute discourse, or spark doubt.”

In Malaysia, a political scandal erupted in June this year after a sex tape allegedly featuring the country’s Minister of Economic Affairs emerged. Same-sex activity in Malaysia is illegal, but the minister and his supporters, including the Prime Minister, claim the video is a convincing deepfake - a claim that has not been verified.

Meanwhile, in a much less sophisticated example (known as a shallowfake) in the US this year, a video of top Democrat Nancy Pelosi, slowed down to make her sound drunk, was shared widely, including in a tweet by US President Donald Trump.

While much of the concern about the subversive power of deepfakes centres on disrupting democratic systems, for now one dark corner of the internet is helping to drive the ubiquity of the technology, and it’s a consumer product that has often been at the forefront of new technology: pornography.

These fake, non-consensual celebrity porn videos now proliferate on the web. Here, popstar Ariana Grande (left) and actress Scarlett Johansson (right) have been manipulated using face-swapping technology to create deepfake videos.

The internet’s growing non-consensual celebrity porn problem

In a report released this week, Deeptrace Labs lifted the lid on the internet’s deepfake problem and how the videos are proliferating at a rapid rate.

“Our research revealed that the deepfake phenomenon is growing rapidly online, with the number of deepfake videos almost doubling over the last seven months to 14,678,” the report states.

The lion’s share of this is down to systems that allow users to face-swap female celebrities onto the bodies of adult film stars to create convincing videos. Dubbed non-consensual celebrity pornography, it has become an issue for the sites hosting it, which have attracted the ire of celebrities – including the popular social media site Reddit, where the term ‘deepfakes’ was coined after a subreddit of the same name emerged a couple of years ago.

“Another key trend we identified is the prominence of non-consensual deepfake pornography, which accounted for 96 per cent of the total deepfake videos online,” the report said. According to the research, those videos amounted to a total of 134 million views.

“Since Reddit’s removal of /r/Deepfakes on February 7th 2018, deepfakes have become increasingly commodified as new deepfake forums, tools, and services have emerged,” the report said.

Celebrities who are commonly targeted in these videos have struggled to fight back due to the difficulty of enforcing copyright in different jurisdictions around the world.

In an interview with The Washington Post in December, Scarlett Johansson detailed how she had tried to fight to have deepfake videos using her image pulled off the web, but found it an almost impossible task thanks in part to the legal grey area that currently exists.

“The fact is that trying to protect yourself from the internet and its depravity is basically a lost cause,” she lamented.

And as this battle goes, so too could go the trustworthiness of digital media at large.
