As the first month of the new year ends, there is no shortage of AI news for communicators to catch up on.
This week, we’ll look at the growing threat of AI deepfakes, get some clarity on how Washington’s seemingly glacial measures to regulate AI will apply to businesses in practice, and explore new tools, initiatives and research that can foster a healthy, non-dystopian future with your AI partners, both in and out of work.
Many of the fears about deepfakes and other deceptive uses of AI came home to roost in the past few weeks. Most notably, X was flooded with non-consensual, explicit AI-generated photos of Taylor Swift. There was so much content that the social media platform temporarily removed the ability to search for the star’s name in an attempt to dampen its reach.
The scale and scope of the deepfakes – and Swift’s status as one of the most famous women in the world – catapulted the issue to the very highest echelons of power. “There should be legislation, obviously, to deal with this issue,” White House press secretary Karine Jean-Pierre said.
Microsoft CEO Satya Nadella cited the incident as part of a need for “all of the guardrails that we need to place around the technology so that there’s more safe content that’s being produced. And there’s a lot to be done and a lot being done there,” Variety reported.
But the problem extends far beyond any one person. Entire YouTube ecosystems are popping up to create deepfakes that spread fake news about Black celebrities and earn tens of millions of views in the process.
Outside of multimedia, scammers are scraping content from legitimate sites like 404 Media, rewriting it with generative AI, and re-posting it to farm clicks, sometimes ranking on Google above the original content, Business Insider reported. Unscrupulous people are even generating fake obituaries in an attempt to cash in on highly searched deaths, such as a student who died after falling onto subway tracks. The information isn’t correct, and it harms grieving families, according to Business Insider.
That pain is real, but on a broader level, this fake content also threatens the bedrock of the modern internet: quality search functions. Google is taking action against some of the scammers, but the problem is only going to get worse. Left unchecked, the problem could alter the way we find information on the internet and deepen the crisis of fake news.
And unfortunately, the quality of deepfakes keeps increasing, further complicating the ability to tell truth from fiction. Audio deepfakes are getting better, targeting not only world leaders like Joe Biden and Vladimir Putin, but also lower-profile figures like a high school principal in Maryland.
These clips reanimate the dead and put words into their mouths, as in the case of an AI-generated George Carlin. They are also coming for our history, enabling the creation of authentic-seeming “documents” from the past that can deeply reshape our present by stoking animus.
It’s a gloomy, frightening update. Sorry for that. But people are fighting to help us see what’s real, what’s not and how to use these tools responsibly, including a new initiative to help teens better understand generative AI. And there are regulations in motion that could help fight back.
Regulation and government oversight
Three months after President Biden signed his landmark executive order on AI, the White House released a statement detailing agencies’ progress against its 90-day deadlines.

“The Order directed sweeping action to strengthen AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more,” the statement reads.
The statement goes on to explain the convening of a White House AI council, which will include top federal officials from a range of departments and agencies. These agencies have completed all of the 90-day actions they were tasked with and made progress toward other, long-term directives.
“Taken together, these activities mark substantial progress in achieving the EO’s mandate to protect Americans from the potential risks of AI systems while catalyzing innovation in AI and beyond,” the statement continues.
Regulatory steps taken to mitigate safety and security risks include:
- Invoking the Defense Production Act to require that developers of AI systems report “vital information,” such as AI safety test results, to the Department of Commerce.
- Proposing a Department of Commerce rule that would require U.S. cloud computing companies to report when they provide AI training to foreign clients.
- Completing risk assessments of AI’s use in critical infrastructure sectors, conducted by nine agencies including the Departments of Defense, Transportation, Treasury, and Health and Human Services.
Focusing on the mandated safety tests for AI companies, ABC News reports:
The software companies are committed to a set of categories for the safety tests, but companies do not yet have to comply with a common standard on the tests. The government’s National Institute of Standards and Technology will develop a uniform framework for assessing safety, as part of the order Biden signed in October.
Ben Buchanan, the White House special adviser on AI, said in an interview that the government wants “to know AI systems are safe before they’re released to the public — the president has been very clear that companies need to meet that bar.”
Regulatory steps to “innovate AI for good” include:
- The pilot launch of the National AI Research Resource, managed by the U.S. National Science Foundation as a catalyst for building an equitable national infrastructure to deliver data, software, access to AI models and other training resources to students and researchers.
- The launch of an AI Talent Surge program aimed at hiring AI professionals across the federal government.
- The start of the EducateAI initiative, aimed at funding AI educational opportunities for K-12 and undergraduate students.
- The funding of programs aimed at advancing AI’s influence in fields like regenerative medicine.
- The establishment of an AI Task Force within the Department of Health and Human Services that will develop policies and bring regulatory clarity to how those policies can jumpstart AI innovation in healthcare.
While the previous executive order offered suggestions and recommendations, these directives on AI mark the first tangible set of actions and requirements issued by the Biden-Harris administration. As the ABC coverage notes, however, the absence of a common standard for evaluating these systems for safety still leaves many questions.
For now, communicators can take inspiration from the style and structure of this fact sheet – note the chart summarizing specific actions of agencies, even though the text is too small to read without zooming in.
Expect to hear more in the coming weeks about what AI business leaders learn from these safety and security mandates. Clarity and transparency on these processes may be slow coming, but these requirements amount to progress nonetheless.
Because this regulation may also shed light on how certain companies are safeguarding your data, what we learn can inform which programs and services your comms department decides to invest in.
Tools and initiatives
China has put its AI development into overdrive, pumping out 40 government-approved large language models (LLMs) in just the last six months, Business Insider reported, including 14 in the past week alone.
Many of the projects come from names known in the U.S. as well: Chinese search giant Baidu is the dominant force, but cellphone makers Huawei and Xiaomi are also making a splash, as is TikTok owner ByteDance. ByteDance has caused controversy by allegedly using ChatGPT to build its own rival model and by creating a generative audio tool that could be responsible for some of the deepfakes we discussed earlier.
It’s unclear how much traction these tools might get in the U.S.: Strict government regulations forbid these tools from talking about “illegal” topics, such as Taiwan. Additionally, the U.S. government continues to put a damper on Chinese AI ambitions by hampering the sale of semiconductors needed to train these models. But these Chinese tools are worth watching and understanding as they serve one of the biggest audiences on the planet.
Yelp, long a platform that relied on reviews and photos from real users to help customers choose restaurants and other services, will now draw from those reviews with an AI summary of a business, TechCrunch reported. In an example screenshot, a restaurant was summarized as: “Retro diner known for its classic cheeseburgers and affordable prices.” While this use of AI can help digest large amounts of data into a single sentence, it could also hamper the human-driven feel of the platform in the long run.
Copyright continues to be an overarching – and currently unsettled – issue in AI. Some artists are done waiting for court cases and are instead fighting back by “poisoning” their artwork in the virtual eyes of AI bots. Using a tool called Nightshade, artists can embed an invisible-to-humans tag that confuses AI models, convincing them, for instance, that an image of a cat is an image of a dog. The purpose is to thwart image-generation tools that learn from artwork whose copyright they do not own – and to put some control back into the hands of artists.
Expect to see more tools like this until the broader questions are settled in courts around the world.
AI at work
There’s no shortage of research on how AI will continue to impact the way we work.
A recent MIT study, “Beyond AI Exposure: Which Tasks are Cost-Effective to Automate with Computer Vision?”, suggests that AI isn’t replacing most jobs yet because it hasn’t been a cost-effective solution to adopt across an enterprise.
“While 36% of jobs in U.S. non-farm businesses have at least one task that is exposed to computer vision,” the study reads, “only 8% (23% of them) have at least one task that is economically attractive for their firm to automate.”
“Rather than seeing humans fade away from the workforce and machines lining up, I invite you to envision a new scenario,” AI expert, author, and President/CEO of OSF Digital Gerard “Gerry” Szatvanyi told Ragan in his read on the research.
“Instead, picture increased efficiency leading to higher profits, which might be reinvested in technology, used to raise worker wages, or applied to training programs to re-skill employees. By and large, employees will enjoy the chance to learn and grow because of AI.”
A recent Axios piece supports Szatvanyi’s vision, with reporter Neil Irwin identifying a theme emerging in his conversations with business leaders: that AI-driven productivity gains are “the world’s best hope to limit the pain of a demographic squeeze”:
“The skills required for every job will change,” Katy George, chief people officer at McKinsey & Co., told Axios. The open question, she said, is whether “we just exacerbate some of the problems that we’ve seen with previous waves of automation, but now in the knowledge sector, as well.”
While avoiding a “demographic squeeze” is a noble goal, focusing on the use cases that can streamline productivity and improve mental health continues to be a practical place to start. One organization answering this call is Atrium Health, which launched a pilot AI program focused on improving operational efficiency and minimizing burnout for healthcare professionals. Its DAX Copilot program can write patient summaries for doctors as they talk – provided the patient has given consent.
“I have a draft within 15 seconds and that has sifted through all the banter and small talk, it excludes it and takes the clinical information and puts it in a format that I can use,” Atrium senior medical director for primary care Dr. Matt Anderson told WCNC Charlotte.
It’s worth noting that this industry-specific example of how AI can be used to automate time-consuming tasks doesn’t negate Dr. Anderson’s skills, but allows him to demonstrate them and give full attention to the patient.
Remember, AI can also automate industry-agnostic tasks beyond note-taking. Forbes offers a step-by-step guide to advanced spreadsheet data analysis using ChatGPT’s data analyst GPT. You can ask the tool to pull out insights that might not be obvious, or trends that you wouldn’t identify on your own.
As with any AI use case, the key is to ask good questions.
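For readers curious what this kind of automated spreadsheet analysis actually does behind the curtain, here is a minimal sketch in Python using the pandas library. The column names and figures are hypothetical examples, not data from the Forbes guide; the point is simply that a good question (“where was our biggest jump?”) maps to a simple computation over the table.

```python
# Minimal sketch of the kind of analysis an AI data tool performs on a
# spreadsheet: load tabular data, compute a trend, surface an insight.
# Column names and numbers below are hypothetical examples.
import pandas as pd

data = pd.DataFrame({
    "month": ["Oct", "Nov", "Dec", "Jan"],
    "media_mentions": [120, 135, 128, 210],
})

# Month-over-month percentage change highlights trends a reader might miss.
data["pct_change"] = data["media_mentions"].pct_change(fill_method=None) * 100

# Flag the largest jump as a candidate "insight" worth a follow-up question.
spike = data.loc[data["pct_change"].idxmax()]
print(f"Largest jump: {spike['month']} ({spike['pct_change']:.0f}% over prior month)")
```

A chat-based tool wraps this same loop, load, compute, summarize, in natural language, which is why the quality of your prompt matters as much as the quality of your data.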
Learning these kinds of AI skills across multiple tools can help you grow into an AI generalist, but those hoping to transition into AI-specific roles will also need a specialist’s understanding of the nuances of specific and proprietary tools, according to Mike Beckley’s recent piece in Fast Company:
“People want to move fast in AI and candidates need to be able to show that they have a track record of applying the technology to a project. While reading papers, blogging about AI, and being able to talk about what’s in the news shows curiosity and passion and desire, it won’t stack up to another candidate’s ability to execute. Ultimately, be ready to define and defend how you’ve used AI.”
This should serve as your latest reminder to start experimenting with new use cases. Focus on time and money saved, deliverables met, and how AI helps you get there. You got this.
What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!
Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology, PR and ad tech beats. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more.