Press releases in seconds. A cornucopia of content. Automated media analyses. It's already become a cliché to say generative artificial intelligence is going to change the communications profession.
But there’s a flip side of the “AI is transforming Communications for the better” coin.
AI also makes it easier for anyone to attack your organization’s reputation. Think disgruntled employees, unethical competitors, angry customers — or a bored 15-year-old with too much time on their hands.
That’s because AI democratizes disinformation. It gives anyone the ability to create and spread misinformation at a scope, speed, scale and quality that was previously the province of governments. Here are a few ways this is happening:
AI creates authentically inauthentic content:
That viral image of the pope in a puffy coat? The “photo” of former President Donald Trump being arrested? The “video clip” of President Joe Biden rapping?
Those were all deepfakes — computer-generated media that looks highly realistic yet is entirely fabricated.
And those deepfakes fooled a LOT of people.
AI does an incredible job of creating counterfeit content that looks like the real deal. And it’s only getting better.
There are almost no barriers to entry:
Want to create a deepfake?
All you need is a computer, internet access, and a few bucks.
According to National Public Radio, one researcher recently created a very convincing deepfake video of himself giving a lecture. It took him eight minutes, set him back $11, and he did it using commercially available AIs.
Creating misinformation doesn’t even require specialized programming knowledge. Many commercial AIs can create deepfakes from a few simple, plain-text prompts.
Creating authentic-looking written content is just as easy and inexpensive.
In 2019, a researcher at Harvard submitted AI-generated comments to Medicaid, which Wired.com reported people couldn’t tell were fake. The researcher created that content using GPT-2; GPT-4, which is vastly more capable, was released just a few weeks ago, and a month’s subscription costs $20.
Unprecedented speed and scale:
A bad actor doesn’t have to spend hours coming up with misinformation. All it takes is the right prompt, and the AI will spew out an almost endless torrent of misinformation about your brand. Sync that up with AI-driven distribution, and they can launch a fake-news tsunami on social media aimed squarely at your organization’s reputation.
Uncanny and rapid precision:
The communications profession excels at understanding audiences. AIs can’t “understand” audiences the way we humans do, but they certainly can analyze audiences faster, cheaper and perhaps more precisely than we ever could. They can then use that analysis to create customized, targeted misinformation in near-real time.
AI generated misinformation is already impacting business, politics — and communicators. In May, a deepfake photo of an explosion at the Pentagon went viral on Twitter, boosted by Russian state news. The S&P 500 briefly dropped three-tenths of a percentage point before the PR pros at the Department of Defense and Arlington County Fire Department managed to get the situation under control.
And we’re only at the tip of the AI misinformation iceberg. As a recent joint research paper from Georgetown, OpenAI and Stanford pointed out, “[AI] will improve the content, reduce the cost, and increase the scale of [misinformation] campaigns… [and it] will introduce new forms of deception…”
The bad news: there are no silver bullets. No single policy, technical solution or piece of legislation is going to fix the problem.
But there’s also good news:
As trusted communications counselors, we’re uniquely positioned to help our organizations and clients navigate the AI misinformation age. Here’s how:
AI is no more of a fad than the printing press, radio, TV and the Internet.
AI truly is transforming the communications landscape, just like social media started changing the profession in the early 2000s. Today, being able to have an intelligent conversation about social media’s role in a comms strategy is part and parcel of being a professional communicator. AI is following the same arc.
By understanding AI’s strengths, its potential and its numerous limitations, we can bring our very human communications expertise and judgment to bear on the issue of AI-generated misinformation.
One of the most valuable things communicators bring to the table is a strategic mindset. That frequently means asking the hard questions and thinking about the things nobody else is considering. Some questions worth asking:
- How effective is our organization or client at monitoring its reputation and spotting misinformation?
- Do employees and key stakeholders know how to recognize misinformation — AI or otherwise — and discern between fact and fake?
- How are other functions and disciplines in my organization thinking about AI? Your colleagues in engineering, sales, legal or IT may have very different and valuable perspectives on the technology. It’s worth taking time to understand them.
At its core, dealing with any kind of misinformation — whether human or AI generated — is a crisis response.
One of the basic principles of crisis communications is that successful communication never happens in a vacuum. In almost every organization, there are stakeholders and decision makers whose opinions matter. The time to build relationships and have conversations about how to respond to misinformation is before the crisis, not during it.
Then put pen to paper and, in partnership with those stakeholders, work through the processes and procedures to do things like:
- Rapidly validate information, because not every unflattering video is going to be a deepfake.
- Determine when to spend time and resources responding to misinformation, and when to ignore it.
- Figure out how to rapidly get factual information out to your key stakeholders.
With the promise of any new disruptive technology there are always challenges, and generative AI is no exception. As professional communicators, we owe it to ourselves and those we serve both to embrace the opportunities and to use our skills and expertise to understand and mitigate the risks.