AI for communicators: New developments you need to know

Keeping up with the advance of AI is hard. Let us help.



With advances in AI occurring every week, keeping up with the news can be a dizzying, daunting task. That’s why we’ve launched this joint Ragan and PR Daily column, rounding up the biggest developments in AI that communicators need to know about, with a focus on how they will impact your work and your business.

This edition looks at significant developments in international AI regulation, an onslaught of legal issues for content creators, how HR teams are using AI, and what comms can learn from the recent wave of tech layoffs attributed to the technology.


Legal issues surrounding AI heat up

The legal issues surrounding generative AI are coming fast. Today, the FTC announced that it will investigate OpenAI, the company behind ChatGPT, over the tool’s inaccuracies and potential harms. According to the New York Times:

In a 20-page letter sent to the San Francisco company this week, the agency said it was also looking into OpenAI’s security practices. The F.T.C. asked the company dozens of questions in its letter, including how the start-up trains its A.I. models and treats personal data.

In another example, a group of authors, including comedian Sarah Silverman, is suing both Meta and OpenAI over their alleged use of the authors’ works to train large language model systems.

“Indeed, when ChatGPT is prompted, ChatGPT generates summaries of Plaintiffs’ copyrighted works – something only possible if ChatGPT was trained on Plaintiffs’ copyrighted works,” the lawsuit says, according to reporting from Deadline.

This lawsuit will be one to watch as courts try to determine whether creators can bar their content from being fed into AI models. It also serves as a warning to those using these tools: you may inadvertently be using copyrighted material, and you could be liable for misuse.

Several services offering AI imagery are trying to get ahead of that concern by protecting their users against copyright lawsuits arising from use of their AI tools.

Shutterstock is offering human review for copyright concerns, including an expedited option, with full indemnity for clients. Adobe Firefly has taken a different tack, claiming all images the AI is trained on are either in the public domain or owned by Adobe. It, too, offers full indemnity for users.

All of these issues speak to one of the biggest challenges facing AI: the unsettled questions of ownership and copyright. Expect this space to continue to evolve — and fast.

How international governments are handling AI regulation

Governments are scrambling to adapt as AI technology advances at breakneck speed. Unsurprisingly, different nations are handling the situation in disparate ways.

The EU, known for its strict privacy regulations, is leaning toward a tough set of rules that some business leaders say threaten industry in the bloc.

“In our assessment, the draft legislation would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing,” the 160 leaders wrote in a letter, CNN reported. Signers include leaders from Airbus, Renault and Carrefour, among others.

Specifically, the signatories say the regulations could put the EU at a competitive disadvantage against the U.S., where some Congressional hearings on AI regulation have been held but no proposals are near passage. Meanwhile, EU regulations are being negotiated with member states now, according to CNN. They could include a ban on the use of facial recognition technology and Chinese-style “social scoring” systems, mandatory disclosure policies for AI-generated content and more.

Meanwhile, fellow technological juggernaut Japan seems more inclined toward a less restrictive, American-style approach to AI than European stringency, according to Reuters.

Journalism’s rocky relationship with AI continues

Newsrooms keep trying to use AI, pledging full fact-checking of the content before publication. And newsrooms keep breaking that promise.

The latest misstep came from Gizmodo, which published an error-filled timeline of the “Star Wars” cinematic universe, the Washington Post reported. Human staff at Gizmodo were given just 10 minutes’ warning before the AI-written story was published; they quickly found basic factual errors in the story and criticized their employer for a lack of transparency around AI’s role in its creation.

“If these AI [chatbots] can’t even do something as basic as put a Star Wars movie in order one after the other, I don’t think you can trust it to [report] any kind of accurate information,” Gizmodo deputy editor James Whitbrook told the Washington Post.

This unforced error is as much a failure of internal communications as external. By not bringing in staff earlier and giving them a chance to ask questions, raise concerns and perform basic fact-checking, Gizmodo owner G/O Media gained powerful critics who were unafraid to speak to the press about its missteps.

But there is, of course, also the question of whether to use AI in journalism at all. The International Center for Journalists has compiled a list of questions to ask before using AI to preserve audience trust.

Tech companies cite AI as the reason behind massive layoffs

In a move that sci-fi novelists Isaac Asimov and Philip K. Dick saw coming, AI is already replacing jobs in the very industry that created it. Data tracked by Layoffs.fyi shows that more than 212,000 tech workers have been laid off in 2023, already surpassing the 164,709 recorded in 2022.

The trend continued in June, when ed tech company Chegg disclosed in a regulatory filing that it was cutting 4% of its workforce “to better position the Company to execute against its AI strategy and to create long-term, sustainable value for its students and investors.”

But the tech industry’s AI-driven layoff wave began this past May, when some 4,000 people lost their jobs to the technology, including 500 Dropbox employees who were informed via a memo from CEO Drew Houston.

“In an ideal world, we’d simply shift people from one team to another,” wrote Houston. “And we’ve done that wherever possible. However, our next stage of growth requires a different mix of skill sets, particularly in AI and early-stage product development. We’ve been bringing in great talent in these areas over the last couple years and we’ll need even more.”

Houston’s words underscore the importance of including AI training in the learning, development and upskilling opportunities offered at your organization. To echo an aphorism that has been shared at many a Ragan event over the past year: “AI won’t replace your job, but someone using AI will.”

These words should read less as a foreboding warning and more as a call to action. Partner with your HR colleagues to determine how this training can be provided to all relevant employees through specific use cases, personalized in collaboration with the relevant managers. And understand that HR has its own relationship with AI to consider, too. More on that below.

AI can replace the ‘human’ in human resources, but not without risk

HR teams face their own set of legal pitfalls to avoid. New York City’s Automated Employment Decision Tool (AEDT) law, considered the first in the country aimed at reducing bias in AI-driven recruitment efforts, will now be enforced, reports VentureBeat.

“Under the AEDT law, it will be unlawful for an employer or employment agency to use artificial intelligence and algorithm-based technologies to evaluate NYC job candidates and employees — unless it conducts an independent bias audit before using the AI employment tools,” the outlet writes. “The bottom line: New York City employers will be the ones taking on compliance obligations around these AI tools, rather than the software vendors who create them.”

Of course, that isn’t stopping HR teams from leaning into AI more heavily. In late June, Oracle announced it would add generative AI features to its HR software to help draft job descriptions and employee performance goals, reports Reuters.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Allison Carter is executive editor of PR Daily. Follow her on Twitter, LinkedIn or Threads.

Justin Joffe is the editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology PR, and ad tech beats. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more. You can find him on Twitter @joffaloff.


