‘Deepfake’ video of Facebook CEO raises red flags for PR, news outlets

The social media platform is keeping a phony Mark Zuckerberg ‘world domination’ video up on Instagram without marking it as fake, deepening concerns about such media manipulations.

Facebook is moving cautiously when it comes to removing fake videos, a stance that threatens to further damage the brand’s reputation for authenticity and accurate news.

It also raises concerns about how such manipulations could be used to damage personal and corporate brands.

The latest video, a creation by artists exhibiting a gallery show about fake video technology, was uploaded to Instagram and immediately garnered scrutiny for its content: Mark Zuckerberg appears to credit a shadowy organization called Spectre with teaching him how to dominate the world. (Spectre is the sinister organization in the James Bond books and films.)

Facebook says it will follow company procedure to evaluate whether the video violates its rules.

The BBC reported:

The account involved had labelled the video with a #deepfake hashtag.

The Instagram post has now been viewed more than 25,000 times. Copies have also been shared via Facebook itself.

“We will treat this content the same way we treat all misinformation on Instagram,” said a spokesman for the app’s parent company Facebook.

“If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages.”

The artists involved said they “welcomed” Facebook’s decision but still questioned the company’s ethics.

“We feel that by using art to engage and critically explore this kind of technology, we are attempting to interrogate the power of these new forms of computational propaganda and as a result would not like to see our artwork censored by Facebook,” they told the BBC.

This isn’t the first time Facebook has had to grapple with a fake video being shared on social media, and much of the concern about these videos comes with an eye to what promises to be a rancorous 2020 election cycle.

The company refused to take down a doctored video of House Speaker Nancy Pelosi that appeared to show her slurring her words, and that precedent now leaves Facebook with fewer options for addressing fake videos on its platform.

The BBC continued:

Had Facebook opted to block the post, it could have faced accusations of hypocrisy after refusing to remove a manipulated clip of Ms Pelosi three weeks ago.

That video was not a deepfake, but had been slowed down in parts to make the Democratic leader’s speech appear garbled.

The tech firm said at the time that information posted to its site did not need to be “true”. But it said it would limit how often the video appeared in members’ news feeds, and provide a link to fact-checking sites.

Ms Pelosi subsequently criticised the firm saying: “Right now they are putting up something they know is false.”

“I can take it … But [Facebook is] lying to the public.”

The Washington Post has since reported that Mr Zuckerberg tried to personally contact Ms Pelosi to discuss the matter, but she had not responded.

However, companies other than Facebook have a vested interest in protecting authenticity on social media. The fake video used CBS News’ banner and branding, which didn’t sit well with the news organization.

CBS News wrote:

Lawyers for CBS News have asked Facebook to take down a “deep fake” video that manipulates the words of Mark Zuckerberg, and was not authorized to use the trademark of CBSN, the streaming service of CBS News.

“CBS has requested that Facebook take down this fake, unauthorized use of the CBSN trademark,” a CBS spokesperson said in a statement.

As of Wednesday evening, the video was still viewable, and Facebook said it had evaluated CBS’ claim and found no violation.

“We take intellectual property rights seriously, and we’ve responded to CBS directly on this issue. At this time, the video remains subject to our standard process,” a Facebook spokesperson said in a statement.

The real version of the video aired on CBSN in September 2017. Zuckerberg, Facebook’s CEO, had used a public live stream to explain the company’s strategy for fighting election interference.

Facebook says it limits distribution of content found to be false, but it doesn’t remove the content or mark it as a manipulation.

CBS continued:

A Facebook spokesperson confirmed to CBS News Wednesday evening that the Zuckerberg video “has been fact checked as false.” On Instagram, users are not alerted that videos have been rated false. 

In a statement, the spokesperson said: “We will treat this content the same way we treat all misinformation on Instagram. If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages.” The spokesperson said that distribution of the video had already been curtailed.

Just how good is “deepfake” technology? You might remember the digital recreation of actor Peter Cushing in “Rogue One: A Star Wars Story,” and many such manipulations still fail to escape what digital artists call the “uncanny valley,” in which the creation can’t quite convince you it is a real person.

The problem is that machine learning and artificial intelligence are continually making this technology more convincing—and therefore more deceptive.

HuffPost reported:

Here’s how it works: Machine-learning algorithms are trained to use a dataset of videos and images of a specific individual to generate a virtual model of their face that can be manipulated and superimposed. One person’s face can be swapped onto another person’s head, like this video of Steve Buscemi with Jennifer Lawrence’s body, or a person’s face can be toyed with on their own head, like this video of President Donald Trump disputing the veracity of climate change, or this one of Facebook CEO Mark Zuckerberg saying he “controls the future.” People’s voices can also be imitated with advanced technology. Using just a few minutes of audio, firms such as Cambridge-based Modulate.ai can create “voice skins” for individuals that can then be manipulated to say anything.

It may sound complicated, but it’s rapidly getting easier. Researchers at Samsung’s AI Center in Moscow have already found a way to generate believable deepfakes with a relatively small dataset of subject imagery — “potentially even a single image,” according to their recent report.
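
The technique HuffPost describes is often implemented as a pair of autoencoders that share one encoder: the shared encoder learns a generic representation of a face, and a separate decoder is trained for each person, so swapping faces amounts to encoding one person’s frame and reconstructing it with the other person’s decoder. The PyTorch sketch below is a simplified illustration of that idea only; the network sizes, the 64x64 face crops and the `faces_a`/`faces_b` placeholder tensors are assumptions made for the example, not the code behind any real deepfake app.

```python
# Minimal sketch of the "shared encoder, two decoders" deepfake idea.
# Train decoder A on person A's faces and decoder B on person B's faces;
# to swap, encode a frame of person A and decode it with decoder B.
# All shapes, data and training details here are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),    # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity
params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# faces_a / faces_b stand in for batches of aligned 64x64 face crops (hypothetical data).
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):                        # a real training run is far longer
    optimizer.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    optimizer.step()

# The "swap": encode person A's face, reconstruct it with person B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```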

Perhaps a bigger problem for PR pros is how cheap these videos are becoming to produce, and how much havoc they could wreak on an organization.

HuffPost continued:

“Before the advent of these free software applications that allow anyone with a little bit of machine-learning experience to do it, it was pretty much exclusively entertainment industry professionals and computer scientists who could do it,” she said. “Now, as these applications are free and available to the public, they’ve taken on a life of their own.”

The ease and speed with which deepfakes can now be created is alarming, said Edward Delp, the director of the Video and Imaging Processing Laboratory at Purdue University. He’s one of several media forensics researchers who are working to develop algorithms capable of detecting deepfakes as part of a government-led effort to defend against a new wave of disinformation.
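
Detection research of the kind Delp describes commonly frames the task as classification: train a model on labeled examples of genuine and manipulated faces, then score new frames. The sketch below is a generic, hypothetical illustration of that approach in PyTorch, not the Purdue lab’s system; the `frames` and `labels` tensors stand in for a real labeled dataset.

```python
# Rough sketch of a frame-level deepfake detector: a binary classifier over face crops.
# This is a generic illustration, not the detection system described by the Purdue lab.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 64 -> 32
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
    nn.Flatten(),
    nn.Linear(128 * 8 * 8, 1),   # one logit: how likely the frame is fake
)

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# frames / labels stand in for a labeled dataset of real (0) and fake (1) face crops.
frames = torch.rand(16, 3, 64, 64)
labels = torch.randint(0, 2, (16, 1)).float()

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(detector(frames), labels)
    loss.backward()
    optimizer.step()

# At inference, average per-frame scores across a clip to rate the whole video.
with torch.no_grad():
    fake_probability = torch.sigmoid(detector(frames)).mean()
```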

Facebook says it is aware of the scope and seriousness of the problem.

CNET reported:

“Leading up to 2020 we know that combating misinformation is one of the most important things we can do,” a Facebook spokesperson said in a statement. “We continue to look at how we can improve our approach and the systems we’ve built. Part of that includes getting outside feedback from academics, experts and policymakers.”

Still, there’s no guarantee that fake news will be pulled from the world’s biggest social network even if its systems flag it. That’s because Facebook has long said it doesn’t want to be “arbiters of truth.” Its community standards explicitly state that false news won’t be removed, though it will be demoted in its News Feed. “There is also a fine line between false news and satire or opinion,” the rules state. (Facebook will remove accounts if the user misleads others about their identity or purpose and content that incites violence.)

How is your organization preparing for these kinds of videos, PR Daily readers?
