AI for communicators: What’s new and what’s next

Plenty of new regulation and novel uses for AI.

Even in December, traditionally a slow time for news, the AI whirlwind doesn’t stop. From new uses of AI ranging from the fun to the macabre to increasing government interest in regulating these powerful tools, there’s always more to learn and consider.

Here are some of the biggest stories from the last two weeks – and what they mean for communicators. 

The latest in regulation

Last Friday, European Union policymakers agreed on a massive law to regulate AI that the New York Times calls “one of the world’s first comprehensive attempts to limit the use of a rapidly evolving technology that has wide-ranging societal and economic implications.”

Included in the law are new transparency rules for generative AI tools like ChatGPT, such as labels identifying manipulated images and deepfakes.

How comprehensive and effective the law will be remains to be seen. Many aspects of the law would not be enforced for a year or two, which is a considerable length of time when attempting to regulate a technology that’s advancing at the rate of AI.

Moreover, Axios reports that some U.S. lawmakers, including Chuck Schumer, have expressed concerns that if similar regulations were adopted in the U.S., they could put America at a competitive disadvantage against China.

The EU’s law also allows the use of facial recognition software by police and governments in certain matters of safety and national security, which has some organizations like Amnesty International questioning why the law didn’t ban facial recognition outright. 

Considering how the EU’s General Data Protection Regulation set a global precedent in 2016 for the responsible collection of audience and customer data, influencing domestic laws like the California Consumer Privacy Act, it’s reasonable to assume that this AI law may set a similar global precedent.

Meanwhile, Washington is still mulling over regulations, but once again more slowly than its global counterparts.

Biden’s White House AI council met for the first time Tuesday to discuss how it would implement the recommendations in a comprehensive executive order published back in October. 

The Hill reports:

The group, which included members of the Cabinet… also discussed ways to bring talent and expertise into the government, how to safety test for new models, and ways to prevent risks associated with AI — such as fraud, discrimination and privacy risks, according to the official.  

The group also discussed the new U.S. Artificial Intelligence Safety Institute, announced by the Department of Commerce’s National Institute of Standards and Technology (NIST) last month.

The order also included new standards for safety and for reporting information to the federal government about the testing, and subsequent results, of models that pose risks to national security, economic security or public health.

Though the White House says the council will meet regularly, a gap of a month and a half between the order’s release and the first meeting doesn’t instill confidence that the White House is moving to address AI regulation at a pace commensurate with the speed at which the tech is evolving.

Of course, some Washington agencies are setting precedents that could be folded into a larger regulatory framework. This week, the U.S. Copyright Office (USCO) refused to register an AI-generated image, marking the fourth time the office has declined to register AI-generated work.

“The USCO’s analysis focuses on issues such as lack of human control, contradictory descriptions of the tool (such as whether it is a filter or a more robust generative tool), and whether the expressive elements of the work were human authored,” reports IP Watchdog.

Since the White House has other partners in Washington, like the USCO, the council should coordinate with the copyright office to name and integrate these precedents into its larger strategy.

While Washington may be slower to coordinate its strategy and codify regulation into law, you can still take inspiration and cues from the EU’s imminent legislation in creating your own brand guidelines – especially if you have audiences, customers or other stakeholders based in those countries. 

Tools and uses

More and more new uses for AI are rolling out weekly, each seemingly more sophisticated than the last. These go far beyond merely generating text and into something that begins to feel truly sci-fi.

For instance, visitors to Paris’s Musée d’Orsay can now chat with an AI version of Vincent van Gogh. The New York Times reported that the artificially intelligent recreation of the painter uses a microphone to converse with visitors about his paintings – but perhaps most notably, his death by suicide.

Hundreds of visitors have asked that morbid question, museum officials said, explaining that the algorithm is constantly refining its answers, depending on how the question is phrased. A.I. developers have learned to gently steer the conversation on sensitive topics like suicide to messages of resilience.

“I would implore this: cling to life, for even in the bleakest of moments, there is always beauty and hope,” said the A.I. van Gogh during an interview.

The program has some less oblique responses. “Ah, my dear visitor, the topic of my suicide is a heavy burden to bear. In my darkest moments, I believed that ending my life was the only escape from the torment that plagued my mind,” van Gogh said in another moment, adding, “I saw no other way to find peace.”

While the technology is certainly cool, the ethics of having a facsimile of a real human discuss his own death – his thoughts on which we cannot truly know – are uncomfortable at best. Still, it’s clear there could be a powerful educational tool here for brands, albeit one that we must navigate carefully and with respect for the real people behind these recreations.

AI voice technology is also being used for a tedious task: campaign calling. “Ashley” is an artificial intelligence construct making calls for Shamaine Daniels, a candidate for Congress from Pennsylvania, Reuters reported.

Over the weekend, Ashley called thousands of Pennsylvania voters on behalf of Daniels. Like a seasoned campaign volunteer, Ashley analyzes voters’ profiles to tailor conversations around their key issues. Unlike a human, Ashley always shows up for the job, has perfect recall of all of Daniels’ positions, and does not feel dejected when she’s hung up on.

Expect this technology to gain traction fast as we move into the big 2024 election year, and to raise ethical issues – what if an AI is trained to seem like it’s calling on behalf of one candidate, but is actually subtly steering people away with distortions of that candidate’s stances? It’s yet another technology that can both intrigue and repulse.

In slightly lower stakes news, Snapchat+ premium users can create and send AI-generated images based on text prompts to their friends, TechCrunch reported. ZDNET reported that Google is also allowing users to create AI-based themes for its Chrome browser, using broad categories – buildings, geography – that can then be customized based on prompts. It’s clear that AI is beginning to permeate daily life in ways big and small. 

Risks

Despite its increasing ubiquity, we’ve still got to be wary of how this technology is used to expedite communications and content tasks. That’s proven by Dictionary.com’s word of the year: Hallucinate. As in, when AI tools just start making things up but state them so convincingly that it’s hard not to get drawn in.

Given the prevalence of hallucinations, it might concern you that the U.S. federal government reportedly plans to heavily rely on AI, but lacks a clear plan for how exactly it’s going to do that – and how it will keep citizens safe from risks like hallucinations. That’s according to a new report put together by the Government Accountability Office.

As CNN reports:

While officials are increasingly turning to AI and automated data analysis to solve important problems, the Office of Management and Budget, which is responsible for harmonizing federal agencies’ approach to a range of issues including AI procurement, has yet to finalize a draft memo outlining how agencies should properly acquire and use AI.

“The lack of guidance has contributed to agencies not fully implementing fundamental practices in managing AI,” the GAO wrote. It added: “Until OMB issues the required guidance, federal agencies will likely develop inconsistent policies on their use of AI, which will not align with key practices or be beneficial to the welfare and security of the American public.”

The SEC is also working to better understand how investment companies are using AI tools. The Wall Street Journal reports that the agency has conducted a “sweep,” or a request for more information on AI use among companies in the financial services industry. It’s asking for more information on “AI-related marketing documents, algorithmic models used to manage client portfolios, third-party providers and compliance training,” according to the Journal. 

Despite the ominous name, this doesn’t mean the SEC suspects wrongdoing. The move may be related to the agency’s plans to roll out broad rules to govern AI use. 

But the government is far from the only entity struggling with how to use these tools responsibly. Chief information officers in the private sector are also grappling with ethical AI use, especially when it comes to mitigating the bias inherent in these systems. This article from CIO outlines several approaches, which you might incorporate into your organization or share with your IT leads. 

AI at work

Concerns that AI will completely upend the way we work are already coming to pass, with CNN reporting that Spotify’s latest round of layoffs (its third this year) was conducted to automate more of its business functions – and that its stock price is up 30% because of it.

But concerns over roles becoming automated are just one element of how AI is transforming the workplace. For communicators, concerns over ethical content automation got more real this week after The Arena Group, publisher of Sports Illustrated, fired its CEO, Ross Levinsohn, following a scandal over the magazine using AI to generate stories and even authors.

NBC News reports:

A reason for Levinsohn’s termination was not shared. The company said its board “took actions to improve the operational efficiency and revenue of the company.”

Sports Illustrated fell into hot water last month after an article on the science and tech news site Futurism accused the former sports news giant of using AI-generated content and author headshots without disclosing it to their readers.

The authors’ names and bios did not connect to real people, Futurism reported.

When Futurism asked The Arena Group for comment on the use of AI, all the AI-generated authors disappeared from the Sports Illustrated website. The Arena Group later said the articles were product reviews and licensed content from an external, third-party company, AdVon Commerce, which assured it that all the articles were written and edited by humans and that writers were allowed to use pen names.

Whether or not that scandal is truly the reason for Levinsohn’s termination, it’s enough to suggest that even the leaders at the top are accountable for the responsible application of this tech.

That may be why The New York Times hired Zach Seward as the newsroom’s first-ever editorial director of Artificial Intelligence Initiatives.

In a letter announcing his role, The Times emphasizes Seward’s career as founding editor of digital business outlet Quartz, along with his past roles as a journalist, chief product officer, CEO and editor-in-chief.

Seward will begin by expanding on the work that various teams across the publication have done over the past six months to explore how AI can be ethically applied to its products. Establishing newsroom principles for implementing AI will be a top priority, with an emphasis on having stories reported, written and edited by human journalists.

The letter asks, “How should The Times’s journalism benefit from generative A.I. technologies? Can these new tools help us work faster? Where should we draw the red lines around where we won’t use it?”

Those of us working to craft analogous editorial guidelines within our own organizations would be wise to ask similar guiding questions as a starting point. Over time, how the publication enacts and socializes these guidelines will likely set precedents for other legacy publications – precedents worth not only mirroring in our own content strategies but also understanding and acknowledging in our relationships with reporters at those outlets.

Unions scored big workforce wins earlier this year when the WGA and SAG-AFTRA ensured writers and actors would be protected from AI-generated scripts and deepfakes. The influence of unions on responsible implementation of AI at work will continue with a little help from Microsoft.

Earlier this week, Microsoft struck a deal with the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO), a federation of 60 unions, to fold the voice of labor into discussions around responsible AI use in the workplace.

According to Microsoft: 

This partnership is the first of its kind between a labor organization and a technology company to focus on AI and will deliver on three goals: (1) sharing in-depth information with labor leaders and workers on AI technology trends; (2) incorporating worker perspectives and expertise in the development of AI technology; and (3) helping shape public policy that supports the technology skills and needs of frontline workers.

Building upon the historic neutrality agreement the Communications Workers of America Union (CWA) negotiated with Microsoft covering video game workers at Activision and Zenimax, as well as the labor principles announced by Microsoft in June 2022, the partnership also includes an agreement with Microsoft that provides a neutrality framework for future worker organizing by AFL-CIO affiliate unions. This framework confirms a joint commitment to respect the right of employees to form or join unions, to develop positive and cooperative labor-management relationships, and to negotiate collective bargaining agreements that will support workers in an era of rapid technological change.

There are lessons to be gleaned from this announcement that reverberate even if your organization’s workforce isn’t unionized. 

By partnering with an organization that reflects the interests of those most likely to speak out against Microsoft’s expanding technologies and business applications, the tech giant holds itself accountable and has the potential to transform some activists into advocates. 

Consider engaging those who are most vocal against your applications of AI by folding them into formal, structured groups and discussions around what its responsible use could look like for your business. Doing so now will help ensure that any guidelines and policies truly reflect the interests, concerns and aspirations of all stakeholders.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology, PR and ad tech beats. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more.
