By Julia Stratton

The Carleton Challenge Conference tackled one of the central challenges of artificial intelligence (AI) — the tension between its immense possibilities and potential peril — during a panel discussion exploring the ethics, policy, governance and risk dimensions of AI.

When thinking about the risks and ethics surrounding the technology, researchers contemplate the hypothetical AI paperclip factory, explains Mary Kelly, professor of Cognitive Science at Carleton University.

Although paperclip manufacturing “sounds innocuous,” the dystopian fear is that a factory run by AI could be so driven by the quest for productivity that it will start ignoring obvious ethical boundaries to maximize output. In what Kelly calls “a very fanciful scenario,” the AI factory “starts regarding humans as potential paperclip material.”

“The paperclip AI is not malevolent in that way that a human is malevolent. It’s malevolent in a way that … it doesn’t even understand that humans are something that can be harmed.”


Panelists at the 2025 Carleton Challenge Conference — The AI Summit: Navigating Disruption and Transformation — discuss the ethics, policy, governance and risk dimensions of artificial intelligence: (left to right) Carleton University Cognitive Science Professor Mary Kelly, Jordan Zed of the AI Secretariat at the Privy Council Office of Canada, Kathleen Fraser of the National Research Council of Canada, Kate Purchase, Microsoft’s senior director for International AI Governance and moderator Allan Thompson, director of Carleton’s School of Journalism and Communication.

Humans, on the other hand, understand this.

“We have empathy,” notes Kelly. AI engineers are “trying to replicate the capacities of the human brain. As far as we’ve come — and we have come very far in these past few years — there’s still big differences between what the human brain can do and AI can do.”

She adds that AI technologies “first have to be able to reason about the existence of human minds and be able to predict what humans want or need in order to be safe.”

“Right now, they’re not good at it,” says Kelly.

That’s why, the panelists agreed, when developing and implementing AI technology in Canada, it’s essential that its human creators guide innovation with human rights, fairness and public safety in mind. Part of the challenge is ensuring that policy and government regulations keep pace with the rapidly changing technology.


Guardrails, Governance, and Canada’s AI Opportunity

Still, countries around the world are rushing to operationalize the world-changing power of AI even as guardrails are being erected to ensure its safe application. Fittingly, while conference-goers were gathered at Carleton to explore the challenges confronting Canada in the age of AI, the Canadian government was naming its first federal Minister of Artificial Intelligence and Digital Innovation — former CBC and CTV broadcaster Evan Solomon.

“Even though there’s such a strong base of AI research in this country,” said Jordan Zed of the AI Secretariat at the Privy Council Office of Canada, there’s “an opportunity to do much more, to think about how we can be leaders in this space, to have greater coherence across the government, across the country — and bring a clear message to our engagements internationally.”

When discussing risks, people often think about the negative implications of AI – out-of-control machines or powerful technologies in unethical hands. But Kate Purchase, senior director for international AI governance at Microsoft, stressed that it’s also important to keep innovating and implementing AI technology because “there is a risk of being left behind,” as well.

She said people often assume that the countries that develop the largest and best language models are going to be the ones that win the AI race. Rather, “it’s about who adopts it fastest, and that’s who ultimately sees the greatest gains.”

There seem to be major advancements in AI every week, noted Zed. Years of work have gone into establishing ethical principles for AI and transparency around its development. However, a major barrier to implementing AI remains the divided opinion within Canada and in other countries about what the principles should be and how rules around AI should be applied.

“It will be imperfect, it will be fragmented,” Zed predicted.

“That only means the opportunity is even greater for countries like Canada to help navigate this space and to provide the leadership and bridge some of these divides.”

As Canada holds the G7 presidency this year — and will host a summit of world leaders June 15-17 in Kananaskis, Alberta — Zed said this country can play a leadership role.

“AI will almost certainly figure prominently into the discussions that take place next month.”


The Bias Beneath: AI’s Blind Spots and Social Impact

Alongside excitement about the profitability and future uses of AI, a key consideration is mitigating the bias that artificial intelligence systems absorb when trained on data that perpetuates societal stereotypes and discrimination.

For example, if you ask AI for a picture of a scientist, it will spit out an image of an old white man with wild hair and glasses, said Kathleen Fraser, research officer at the National Research Council of Canada and adjunct professor of Computer Science at Carleton. Likewise, AI can reinforce systems that further entrench racial, gender and other biases that harm marginalized groups and run counter to Canada’s values of human rights.

Fraser explained that you can ask a large language model how it arrived at an answer, and it will give you an explanation that sounds reasonable – but is not necessarily related to the underlying mechanisms and processes that led it to that answer.

“Sometimes the patterns are because of real knowledge or information that’s there, and sometimes the patterns are because of systemic historical biases, and sometimes the patterns are just random,” she said.

Many are also concerned about the concentration of wealth in the hands of the few who build and control new AI technologies, rather than distributing the profits — and benefits — more equitably throughout society.

“It’s really important that as this technology sees greater use…we all benefit from it,” Kelly said. “That we have more leisure time and more comfortable lives, rather than AI just being used as a tool for extracting more value from fewer labourers and lining the pockets of the very wealthy.”

Although many are concerned about the risks of AI in the future, Fraser said that when considering policy and governance, we need to remember to focus on the issues that are already here.

“One of the biggest misconceptions that I see around AI is this idea that all the risks of AI might happen at some point in the distant future,” she said.

“I think the risks are here now.”



2025 Carleton Challenge Conference Recap

Thursday, May 15, 2025