By Allison Young

Elena Fersman embraces a scenario that some artificial intelligence (AI) pessimists would likely describe as a nightmare: self-automated telecommunication networks created by AI systems tapping each other for ideas.

Fersman, vice-president at Swedish telecom giant Ericsson and head of the company’s Global AI Accelerator, closed this year’s Challenge Conference at Carleton University by offering glimpses of the next phase of AI’s evolution.

Ericsson has partnered with Carleton for more than five years, noted Rafik Goubran, vice-president (Research and International), when introducing Fersman’s closing keynote. He praised the “collaborative effort to drive innovation, train skilled workers and build more reliable, secure technology for the future of 5G wireless communications.”


Ericsson vice-president Elena Fersman, head of the company’s Global AI Accelerator, delivered the closing keynote address at this year’s Carleton Challenge Conference, The AI Summit: Navigating Disruption and Transformation.

He added that Carleton has more than 100 researchers working on AI-related projects, contributing to a record-breaking year in which the university drew $113 million in sponsored research funding.

Fersman’s talk, AI and Telecom: Evolving Architectures and Operational Integration, focused on the integration of AI into telecommunication networks. She envisions a future in which telecom networks become fully self-automated through AI.

“I don’t want to have language, as we know it,” said Fersman.

“I want to have very efficient and real-time communication between things.”

Many people have adopted conversational chatbots into their daily lives, she noted. These AI chatbots rely on certain rules and human input to generate content, but Fersman said she’s thinking well beyond this.


The Rise of Agentic AI: Networks That Think Together

What she describes is agentic AI, an emerging technology in the rapidly evolving AI sphere.

While Fersman says she hears about agentic AI “every second word” in her own community, she was pleased to hear it being mentioned by others at the Carleton conference.

The emergence of agentic AI is revolutionizing the way we apply AI, according to Fersman. Rather than relying on human input like traditional chatbots built on large language models, agentic AI can reason and orchestrate other AI agents, creating its own workforce.

“When one is triggered and it doesn’t know completely how to address the task, it can trigger a friend,” explains Fersman.

“It can ask five friends. Together, they will automatically build a workflow, and they will continue addressing the whole task. One agent can have partial knowledge about solving the task, and they can talk with each other and optimize their knowledge. The master of those agents — so the oldest brains in the brain — may learn about which agent is performing better, which one is performing worse.”
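For readers unfamiliar with the pattern, the short Python sketch below illustrates the orchestration loop Fersman describes: a master agent delegates subtasks to specialist agents, lets another agent step in when one fails, and keeps score of which agents perform better. It is a hypothetical toy example, not Ericsson’s implementation; the agent names, skill probabilities and scoring rule are invented purely for illustration.

# Hypothetical sketch of agent orchestration (illustrative only, not Ericsson's system).
from dataclasses import dataclass, field
import random

@dataclass
class Agent:
    name: str
    skill: float  # probability that this agent can handle a given subtask

    def solve(self, subtask):
        # An agent has only partial knowledge; it may fail and defer to a "friend".
        if random.random() < self.skill:
            return f"{self.name} handled '{subtask}'"
        return None

@dataclass
class MasterAgent:
    workforce: list
    scores: dict = field(default_factory=dict)  # the "brain of brains" tracks performance

    def orchestrate(self, subtasks):
        results = []
        for subtask in subtasks:
            # Try agents with the best track record first; fall back to the others.
            for agent in sorted(self.workforce, key=lambda a: -self.scores.get(a.name, 0)):
                answer = agent.solve(subtask)
                if answer is not None:
                    self.scores[agent.name] = self.scores.get(agent.name, 0) + 1
                    results.append(answer)
                    break
        return results

if __name__ == "__main__":
    master = MasterAgent([Agent("planner", 0.9), Agent("optimizer", 0.6), Agent("monitor", 0.4)])
    print(master.orchestrate(["forecast load", "adjust capacity", "verify KPIs"]))
    print("agent scores:", master.scores)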


Fersman speaks with Carleton University Industry and Partnership Services (IPS) director Chris Lannon

Agentic AI can learn and adapt in real time, without human intervention, she said.

“Yes, humans in the loop, we need to allow for that, but I’m a strong believer in fully autonomous networks,” she said.

“It needs to be able to run completely in a self-driving mode. And in some cases, as we already discussed here, you will not be able to explain the decisions. Because it happens — either it’s a too-big search space, or it’s a too-small real-time loop.”

AI-driven automation is one contributor to cost reductions within companies, according to Fersman.

She gave the example of a 45,000-person department at a company deploying AI: automation cuts the number of people needed from 45,000 to 7,000. Fersman emphasized, however, that these workers are not fired; they are reallocated to other jobs within the company, optimizing the workforce rather than shrinking it.

The cost of developing AI software is also decreasing, she said.

In the past, AI models have cost many millions of dollars to train and implement. In December, US$24 million was spent on the OpenAI o1 model, but a few weeks later DeepSeek R1 was deployed at a fraction of that expense, costing just US$880,000.

“Within weeks from their release, the researchers at Stanford and Berkeley are coming out with models of similar precision that were trained for less than $50,” says Fersman.

Efficiency, Emissions, and Ethical Tradeoffs

Like several speakers at this year’s Challenge Conference, Fersman acknowledged certain drawbacks of AI — for example, the high CO2 emissions caused by the huge energy appetites of ChatGPT and other generative AI models.

Fersman said she tries to make her AI queries worth their CO2 cost, though she admits she sometimes finds herself looking at pictures of cats as well, a less worthwhile use.

This does not deter Fersman from her research, however, as she weighs the potential benefits of AI against its negative impacts.

“The important thing,” she said, “is that when you ask the question about the revenue investment of every search, are we winning something from there? Yeah, probably, in many, many cases, we will be winning something for sure. If it comes to, for example, a more reliable method that predicts any failure — and you can prevent a car crash. That’s a good case.”




Thursday, May 15, 2025