The best way to respond to an AI crisis

The winner of the Ragan Research Award sheds light on tactics in this new field of crisis response.

Deny. Apologize. Or make an excuse. 

These are three of the main strategies organizations use during crises, including those related to generative AI.

Two of those work fairly well, according to a paper produced by Sera Choi as part of the second annual Ragan Research Award, in partnership with the Institute for Public Relations.

Choi, a native of South Korea and current PhD candidate at Colorado State University, explored how best to respond to these emerging issues in her paper “Beyond Just Apologies: The Role of Ethic of Care Messaging in AI Crisis Communication.”  

To examine the best way to respond to an AI-related crisis, Choi created a scenario around a fictitious company whose AI recruiting tool was found to have a bias toward male candidates. 

Participants were shown three response strategies. In the first, an excuse, the company said the AI’s bias did not reflect its views. In the second, an apology, it owned the error and promised changes. And in the third, the company outright denied the problem.

Choi told PR Daily it was important to study these responses because generative AI can cause deeper problems than most technological snafus. 

“AI crises can be different than just technological issues, because AI crises can actually impact not only the individual, but also can impact on society,” Choi said.  

The research found that apologies or excuses could be effective – but denials just don’t fly with the public. 

“Interestingly, I also observed that the difference in effectiveness between apology and excuse was not significant, suggesting that the act of acknowledgment itself is vital,” she said.

Still, there could be times when you need to push back against accusations.

“While the deny strategy was the least effective among the three, it’s worth noting that there might be specific contexts or situations where denial could be appropriate, especially if the organization is falsely accused. However, in the wake of genuine AI-driven errors, our results underscore the drawbacks of using denial as the primary response strategy,” Choi wrote in the paper.  

Acknowledging bias or other problems in AI is the first step, but there are others that must follow to give an organization the best chance of recovery.  

“Reinforcing ethical responsibility and outlining clear action plans are critical, indicating that the organization is not only acknowledging the issue but is also committed to resolving it and preventing future occurrences,” Choi said. “This could include investments in AI ethics training sessions for employees and collaborations with higher education institutions to conduct in-depth research on ethical responsibilities in the field of AI.” 

Choi is just getting started with her research. In the future, she hopes to expand it to other kinds of AI crises and to issues that affect public institutions.

“The clear takeaway is that organizations should prioritize transparency and ethical responsibility when addressing AI failures,” Choi said. “By adopting an apology or excuse strategy and incorporating a strong ethic of care, they can maintain their reputation and support from the public even in difficult times.” 

Read the full paper here.

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.
