Deepfakes are a problem for social media companies.
The technology that allows users to create lifelike fake videos of another person could have dire consequences for trust in digital media and for the nation's ability to have open and honest discussions about the problems it faces. It could also help foreign actors attack the U.S. with misinformation and propaganda.
And it’s a big problem for communicators tasked with crisis and brand management, developing and preserving trust, and telling authentic stories.
Facebook is introducing new guidelines about these kinds of videos in an attempt to address the problem.
Facebook now says it will flag videos that have been manipulated or edited, but it won’t remove them entirely. The reason for this, the company says, is that leaving up flagged videos helps people have context when the videos appear elsewhere on the internet.
As the company explained in its announcement:

> This approach is critical to our strategy and one we heard specifically from our conversations with experts. If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context.
Facebook also hopes to borrow some credibility from partner organizations as it tries to take on this threat. The blog post highlights partnerships with organizations like the University of California, Berkeley, Reuters, MIT, Microsoft, the BBC and others—all in service of trying to put some heft behind its promise.
Some critics say Facebook has been slow to address misleading claims and misinformation on its platform. The company’s decision to allow all political ads to appear without fact-checking in 2019 was met with harsh criticism.
The move may help shore up the company’s brand position ahead of what will undoubtedly be a tumultuous election year.
Facebook is pushing back on deepfake videos as the 2020 presidential campaign ramps up — and the company clearly hopes to avoid a repeat of the fallout from the 2016 election, when it was accused of allowing voter manipulation from fake accounts and thousands of Russian-backed political ads.
Here are some takeaways from Facebook’s crisis response on deepfake videos:
- Education is necessary. When addressing threats like “deepfakes,” don’t expect your audience to be familiar with all the terminology or up to date on the latest developments. Facebook took the time to explain the dangers before offering its response, even though the crisis made headlines last summer.
- Find partners. Facebook’s credibility has been under attack for the last couple of years, but the social media company isn’t the only organization with an interest in addressing the problem of deepfakes. Enlisting other organizations to participate in your efforts can lend credibility to your crisis response.
- Vigilance will be crucial. Facebook admits that the solutions for addressing fake content online are by necessity complex and will require lots of cooperation. Communicators should take note: Only through careful listening and proactive action will organizations be able to protect their reputations from the dangers of the future.
What do you think of Facebook’s efforts to curb the use of deepfakes and address fake content on its platform?