AI for communicators: What’s new and what matters

From regulation to new tools and beyond.

AI continues to shape our world in ways big and small. From new legal settlements and protections for artists to tools that will help communicators and bad actors alike, there’s no shortage of big stories.

Here’s what communicators need to know. 

AI risks and regulation

To the surprise of no one following our updates, AI’s evolution is leading to more regulation designed to keep the fast-moving tech in check.

Earlier this week, the estate of late comedian George Carlin settled with podcasters Will Sasso and Chad Kultgen over their comedy special, “George Carlin: I’m Glad I’m Dead,” which was made by training an AI algorithm on five decades of Carlin’s work and posted on YouTube.

In addition to allegations of copyright infringement, the suit also claimed that the comedians used Carlin’s name and likeness without permission. 

The New York Times reports:

“The world has begun to appreciate the power and potential dangers inherent in A.I. tools, which can mimic voices, generate fake photographs and alter video,” [Josh Schiller, a lawyer for Carlin’s estate] said in a statement on Tuesday.

He added: “This is not a problem that will go away by itself. It must be confronted with swift, forceful action in the courts, and the A.I. software companies whose technology is being weaponized must also bear some measure of accountability.”

A spokeswoman for Mr. Sasso declined to comment. A spokesman for Mr. Kultgen could not immediately be reached.

Kelly Carlin, George Carlin’s daughter, wrote in a statement that she was pleased that the suit had been resolved so quickly.

“While it is a shame that this happened at all, I hope this case serves as a warning about the dangers posed by A.I. technologies and the need for appropriate safeguards,” Ms. Carlin said.

The 200 musicians who signed an open letter organized by the nonprofit Artist Rights Alliance would likely agree with her.

The letter, which includes signatures from the likes of Katy Perry, J Balvin, Billie Eilish and Jon Bon Jovi, urges “AI developers, technology companies, platforms and digital music services to cease the use of artificial intelligence (AI) to infringe upon and devalue the rights of human artists.” While it acknowledges AI’s “enormous potential to advance human creativity,” it also claims that many companies use artists’ work irresponsibly to train models that dilute the artists’ royalty pools.

“Unchecked, AI will set in motion a race to the bottom that will degrade the value of our work and prevent us from being fairly compensated for it,” the letter continues.

Thankfully, musicians who perform on TV and film scores, along with performers onscreen, are winning some protections of their own. The American Federation of Musicians voted this week to ratify its new contract with the major studios, securing streaming residuals and AI protections that codify the provisions won after the Writers Guild of America and SAG-AFTRA strikes ended last year.

According to Variety:

“This agreement is a monumental victory for musicians who have long been under-compensated for their work in the digital age,” said Tino Gagliardi, the union’s international president, in a statement.

On AI, the union got a stipulation that musicians are human beings. The agreement allows AI to be used to generate a musical performance, with payment to musicians whose work is used to prompt the AI system.

“AI will be another tool in the toolbox for the artistic vision of composers, and musicians will still be employed,” said Marc Sazer, vice president of AFM Local 47, in an interview. “They cannot produce a score without at least a human being.”

Treating AI as “another tool in the toolbox” is a great way to preserve human agency while automating certain tasks. The agreement is also a reminder that any collaborative policies you set with the creatives you work with (be they influencers, freelancers or full-timers) would do well to include context on how AI tools will or won’t be used to augment their work.

Remember, communicators who start crafting internal use guidelines and governance around AI now will be one step ahead when federal regulations are finally codified.

This week, the U.S. Department of Commerce announced a partnership between the U.S. and U.K. AI Safety Institutes that will see them share research, safety evaluations and guidance on AI safety as they agree on processes for evaluating AI models, systems and agents.

This partnership will include at least one joint testing exercise on a publicly accessible model not named in the press release. 

“This partnership is going to accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society,” said U.S. Secretary of Commerce Gina Raimondo. “Our partnership makes clear that we aren’t running away from these concerns – we’re running at them. Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance.”

It’s unclear how this partnership connects to past reports of global governmental collaboration on AI regulation. But the timing of the announcement, between the ratification of a landmark European AI law last month and its expected enforcement in May, is a smart accountability play by the U.S. government, which has been moving more slowly than other countries on AI regulation. U.S. companies that operate in EU regions will be held accountable to some of the EU’s global standards either way.

While matters of safety and security are no doubt of high interest to your employees and external stakeholders alike, the larger push for regulation also ties back to which large language models (LLMs) are used most often – and raises questions about the dominance of the companies producing them.

Appearing with Jon Stewart on The Daily Show this past Monday, FTC Chair Lina Khan touched on her push for antitrust reform and took a shot at Apple after Stewart revealed that Apple had asked him not to have her on his podcast.

“I think it just shows the danger of what happens when you concentrate so much power and so much decision-making in a small number of companies,” Khan said.

Keep that in mind as we look at Apple’s newest AI innovation. 

Tools and use cases 

Apple’s newest venture is ambitious, set to take on nothing less than industry leader ChatGPT. Reference Resolution As Language Modeling, or ReALM, is anticipated to power Siri and other virtual assistants. What makes it stand out, according to Business Insider, is its superior ability to interpret context.

For example, let’s say you ask Siri to show you a list of local pharmacies. Upon being presented with the list, you might ask it to “Call the one on Rainbow Road” or “Call the bottom one.” With ReALM, instead of returning an error message asking for more information, Siri could decipher the context needed to follow through with such a task, and do so better than GPT-4 can, according to the Apple researchers who created the system.
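
For the technically curious, here’s the problem in miniature. The core idea behind ReALM, per Apple’s paper, is to represent what’s on screen as text a language model can reason over. The toy Python sketch below resolves the same kinds of references with simple hand-written rules instead of a model; the business names and phone numbers are invented purely for illustration.

```python
# A toy illustration of reference resolution, the problem ReALM tackles.
# This is not Apple's implementation; the entities and matching rules
# below are invented for illustration only.

on_screen_entities = [
    {"name": "Rainbow Road Pharmacy", "phone": "555-0101"},
    {"name": "Main Street Drugs", "phone": "555-0102"},
    {"name": "Corner Care Pharmacy", "phone": "555-0103"},
]

def resolve_reference(utterance, entities):
    """Match positional cues ('the bottom one') and name cues
    ('the one on Rainbow Road') against what's shown on screen."""
    text = utterance.lower()
    if "bottom" in text or "last" in text:
        return entities[-1]
    if "top" in text or "first" in text:
        return entities[0]
    for entity in entities:
        # Match any distinctive word from the entity's name.
        if any(word in text for word in entity["name"].lower().split()):
            return entity
    return None  # a real assistant would ask a follow-up question here

print(resolve_reference("Call the one on Rainbow Road", on_screen_entities)["phone"])  # 555-0101
print(resolve_reference("Call the bottom one", on_screen_entities)["phone"])           # 555-0103
```

ReALM’s advance is handling far messier, more ambiguous versions of this with a fine-tuned language model rather than brittle rules like these.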

ReALM also excels at understanding images with embedded text, which can make it handy for pulling phone numbers or recipes out of uploaded images, BI reports. It’s a big move for a company that has been widely seen as lagging behind the industry’s major AI players. Will it be enough?

It’s clear the battle for users, especially in the business space, will be intense. Amazon is luring startups to its cloud products by offering free credits for AI tools, even competitors’ products. But even though the credits can be used on other tools, Amazon made it clear to Reuters that its end goal is to build market share for its Bedrock platform, which hosts models from Anthropic and other providers.

“That’s part of the ecosystem building. We are unapologetic about that,” said Howard Wright, vice president and global head of startups at Amazon Web Services. Expect more brands to offer incentives as these wars really heat up.

Reuters also reports that Yahoo has made its own big investment in AI-powered news through its purchase of Artifact, created by the co-founders of Instagram. The news recommendation platform will help Yahoo serve more personalized content to visitors to its websites, a trend we’re certain to see more of in the future. AI’s capability to determine what audiences want and deliver it seamlessly could usher in a new era for content marketing.

Another major update for anyone who works with content is a new feature in OpenAI’s DALL-E image generator that allows users to edit pictures with conversational prompts. ZDNet reports that images can be edited either by using a selection tool to highlight the areas that should be altered and then typing a prompt, or simply by writing prompts such as “make it black and white.” For those in the comms space who lack graphic design skills, this could be a major leap forward. But of course, it also comes with risks, like all AI tools at the moment.
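
For teams that build on OpenAI’s API rather than working in the ChatGPT interface, the closest analogue is the images edit endpoint, which pairs an image (plus an optional mask marking the editable region) with a text prompt. Here’s a minimal sketch; the file names are placeholders, and at the time of writing this endpoint was documented for DALL-E 2 rather than the newer conversational editor.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Repaint only the transparent region of the mask, guided by the prompt.
# "photo.png" and "mask.png" are placeholder file names.
result = client.images.edit(
    image=open("photo.png", "rb"),
    mask=open("mask.png", "rb"),
    prompt="Make the sky black and white",
    n=1,
    size="1024x1024",
)

print(result.data[0].url)  # URL of the edited image
```

Either way, the workflow is the one ZDNet describes: point at the region you want changed, then say what you want in plain language.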

In a move that is all but certain to deepen problems with deepfakes even as it presents unique new opportunities for good actors, OpenAI has announced a tool that CNN says can recreate human voices with “startling accuracy.” Voice Engine needs just a 15-second recording of a person’s voice to convincingly mimic it. OpenAI says the tool can help with translation, reading assistance or speaking assistance for people who cannot talk – but it also recognizes the potential for misuse. 

“Any broad deployment of synthetic voice technology should be accompanied by voice authentication experiences that verify that the original speaker is knowingly adding their voice to the service and a no-go voice list that detects and prevents the creation of voices that are too similar to prominent figures,” OpenAI said in a statement shared with CNN. 

But others aren’t waiting for organizations like OpenAI to solve the misinformation problem – they’re acting now. 

Oren Etzioni, a University of Washington professor and the founding chief executive of the Allen Institute for AI, spearheads an organization called TrueMedia.org, which has released a free suite of tools for journalists, fact-checkers and others (like you!) trying to parse truth from fiction amid the explosion of AI. The tools offer confidence assessments of how likely an image or video is to have been created by AI. It’s a helpful resource, but even Etzioni warns of its limitations.

“We are trying to give people the best technical assessment of what is in front of them,” Etzioni said. “They still need to decide if it is real.”

AI at work

One of the most common talking points for companies looking to invest in AI is its potential to boost efficiency and productivity on mundane tasks. But several economic experts aren’t so sure, according to the New York Times:

But many economists and officials seem dubious that A.I. — especially generative A.I., which is still in its infancy — has spread enough to show up in productivity data already.

Jerome H. Powell, the Federal Reserve chair, recently suggested that A.I. “may” have the potential to increase productivity growth, “but probably not in the short run.” John C. Williams, president of the New York Fed, has made similar remarks, specifically citing the work of the Northwestern University economist Robert Gordon.

Mr. Gordon has argued that new technologies in recent years, while important, have probably not been transformative enough to give a lasting lift to productivity growth.

“The enthusiasm about large language models and ChatGPT has gone a bit overboard,” he said in an interview.

Of course, that’s not stopping large organizations from exploring productivity gains that the tech can bring. The story goes on to share details of how Walmart, Macy’s, Wendy’s and other brands are using AI internally across comms, marketing and logistics functions.

The piece notes that Walmart’s “My Assistant” section of its employee app uses AI to answer questions about benefits, summarize meetings and draft job descriptions:

The retailer has been clear that the tool is meant to boost productivity. In an interview last year, Donna Morris, Walmart’s chief people officer, said one of the goals was to eliminate some mundane work so employees could focus on tasks that had more impact. It’s expected to be a “huge productivity lift” for the company, she said.

Framing the tech as a means to eliminate mundane work tracks with how AI is often positioned as a partner, not a replacement. But that won’t be the case for employees whose jobs involve physical labor.

Re-Up, an AI-powered convenience store chain, announced its integration of Nala Robotics’ autonomous fry-cooking station, dubbed “The Wingman,” at several of its locations.

“The integration of robotics kitchens stands as a pivotal strategy in our modernization initiative, enabling us to enhance operational efficiency and deliver seamless services while upholding unwavering quality standards around the clock,” Narendra Manney, co-founder and president of Re-Up, said in the press release.

“The Wingman doesn’t get sick, can work around the clock and can cook any dish efficiently all the time, improving on quality and saving on labor costs,” said Ajay Sunkara, CEO of Nala Robotics. “At the same time, customers get to choose from an assortment of great-tasting food items just the way they like it.”

When communicating with employees, consider that they’re seeing these stories, and they’re worried. Work with senior leaders on their language so they don’t frame automation purely as a business efficiency without acknowledging the people behind the labor savings. Fries may be just as tasty cooked by a robot, but the fear such developments can instill in employees doing more menial tasks is still something to get out in front of.

No wonder the Wall Street Journal reports that top M.B.A. programs at American University and Wharton are training students to think about how AI will automate tasks in their future careers:

American’s new AI classwork will include text mining, predictive analytics and using ChatGPT to prepare for negotiations, whether navigating workplace conflict or advocating for a promotion. New courses include one on AI in human-resource management and a new business and entertainment class focused on AI, a core issue of last year’s Hollywood writers strike. 

Officials and faculty at Columbia Business School and Duke University’s Fuqua School of Business say fluency in AI will be key to graduates’ success in the corporate world, allowing them to climb the ranks of management. Forty percent of prospective business-school students surveyed by the Graduate Management Admission Council said learning AI is essential to a graduate business degree—a jump from 29% in 2022. 

This integration of AI education at some of the country’s top business schools should serve as a call to communicators to explore the learning and development opportunities for employees who would benefit from upskilling on AI as part of their career trajectory.

Ultimately, this trend is another reminder that your work goes beyond crafting use cases, guidelines and best practices. Partnering with HR and people managers to see what AI training is available for the top talent in your industry, then positioning that training as a core tenet of your employer brand, will ensure your organization remains competitive and primed for the future. 

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology, PR and ad tech beats. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more.
