Crisis comms strategies for protecting against deepfakes

A plan against deepfakes needs to be part of your crisis playbook.

Preparing a deepfakes crisis plan

In April, a bizarre case emerged out of a Maryland high school. An audio clip seemed to capture the principal making abhorrent, racist remarks.

But it turned out the clip was a fake. It was allegedly created with generative AI by the school’s athletic director, who now faces a multitude of criminal charges.

The incident draws attention to just how easy it is to spoof anyone, from the most powerful people on the planet to an obscure school principal. And somewhere in between those two extremes are the principals at your organization, whether they're CEOs, government leaders or other key figures.
Mike Nachshen, the president and owner of Fortis Strategic Communications and former head of international communications for Raytheon Missiles & Defense, is sure we’ll hear about more such cases in the near future.

“Why am I so confident? Because I know people and people are going to people,” he said wryly. “I mean, it’s truly as simple as that.”

Nachshen is an old hand with mis- and disinformation, having combatted Russian operations in Europe, and sits on the cutting edge of AI technology use. AI, he says, has “democratized disinformation.”

He says that while organizations must be aware of AI’s potential, the risk of deepfakes isn’t an all-consuming emergency. It’s another potential crisis that must be planned for, the same way you’d prepare for a tornado or a mass shooting.

“Communicators need to understand this technology and understand how to use it,” he said. “And not just how to use it, but how it can be used against you. How it can be weaponized against your organization.”

Here’s what communicators should know about the risk of deepfakes — and how to prepare your crisis plan for a worst-case scenario.

It can happen to anyone

A deepfake, whether it's a video, an image or an audio clip, can be made of anyone if there is enough data. Audio deepfakes are particularly easy to make at this juncture, requiring just a few minutes of someone's voice to train on. And with so much of daily life now captured on video, there's likely enough data on the internet right now to replicate most people.

And while some AI programs prevent cloning the voices of prominent politicians to avoid just this kind of issue, that leaves the rest of us little people vulnerable to deepfakes.

“If there can be deepfakes of the president, there can be deepfakes of you,” Nachshen said.

And at the moment, there’s no real way to prevent this content. Nachshen said he’d be deeply skeptical of anyone who offers a preventative technological fix for deepfakes.

Because prevention is all but impossible right now, organizations need a reactive deepfake plan as part of a comprehensive crisis strategy.

“There’s not a cookie-cutter approach,” Nachshen said. “Every organization has unique needs and approaches. The way a school might react is very different than the way a publicly traded company is going to react.”

But there are a few best practices every organization can follow, and many will sound familiar from good crisis comms practice. First, get all the relevant stakeholders in a room before there is a crisis. A deepfake response will involve stakeholders from IT, HR, legal and, of course, communications.

“It’s establishing that connective tissue, thinking through the different scenarios and contingencies and how you’re going to respond,” Nachshen said. “And making sure that the first time you’re having this conversation isn’t when the you-know-what hits the fan.”

That’s all pretty standard crisis planning. But there are some unique circumstances to consider when prepping for a deepfake crisis.

The first is determining whether the content is, in fact, a deepfake and not just an embarrassing incident.

“God bless ‘em, some people say stupid stuff on camera and TV all the time,” Nachshen pointed out.

Indeed, there have already been incidents where people have claimed videos were deepfakes when they may in fact have been real. A Turkish opposition candidate claimed that a sex tape that appeared to depict him was a deepfake created by the Russians. Real or fake, he withdrew from the race.

In addition to asking the person depicted if something is a deepfake or not, there are also forensic tools that can help separate fact from fiction. These tools were used in the Maryland case.

Organizations such as Intel have also developed commercial deepfake detectors for cybersecurity purposes. These tools spot subtle changes in skin tone caused by blood flow, changes invisible to the human eye.

But even if a clip is confirmed to be a deepfake, that doesn't mean it requires a response. It depends on who is reacting to and talking about the piece. Just because there's social media chatter doesn't mean you need to elevate the issue with a formal statement.

“Again, this is where the planning piece comes in, identifying who the audience is that you’re trying to reach, how you’re going to convey (something), how you’re going to reach out to them,” Nachshen said.

There’s nothing new

If this all sounds familiar, it should. Nachshen maintains that deepfakes don’t present a particularly novel challenge.

“It’s not necessarily the fact that there’s anything magical or special about that AI created this information other than the scope, the speed and scale, and also the quality of it,” he said.

But overall, a sound, classic crisis communications strategy prepped in advance can help fend off even the cleverest deepfake.

Allison Carter is editor-in-chief of PR Daily. Follow her on LinkedIn.