Generative AI is a tool. Like a calculator or a hammer, it is morally neutral.
But when humans get involved, so does morality. We can choose to use those tools to brainstorm new campaign ideas, calculate our donations to charity or build a house.
Equally, we can choose to use those tools in a way that compromises data, slashes budgets or destroys a wall.
It’s all up to the people who wield them.
Miri Rodriguez, a senior storyteller at Microsoft, sees a chance to blaze a new trail when it comes to creating ethical guardrails around AI.
“The opportunity and the responsibility is ours to stop and think, how can we get ahead of this on time instead of ignoring it or bypassing it?” she said.
Think of when social media burst onto the scene. There were no rules. No guidelines. Everything had to be done from scratch to protect both people and organizations.
This is a similar moment, Rodriguez said.
“While we may want to control (AI) in some way, it really is not to be controlled in the way that we think,” she explained. “The best way to really leverage it is to create those guidelines and those policies around it and how it serves specific audiences.”
Rodriguez counsels sitting down with both leaders and constituents and gauging their feelings toward generative AI. And once you know how to proceed, she suggests thinking of AI as a “smart intern” who can be molded in a variety of directions.
“We’re teaching it,” she said, “and it’s our responsibility to do that and to come in with that piece of knowledge to say, ‘I’m going to build a relationship with this machine to help it help me.’”
The full story, containing ethical frameworks for AI, is available exclusively to members of the Communications Leadership Council.