Prof. Adegboyega Ojo

Adegboyega Ojo is a Professor and the Canada Research Chair in Governance and Artificial Intelligence in the School of Public Policy and Administration at Carleton University. His specialties include digital government and data-intensive public-sector innovation. In 2024, he co-edited a special issue and its introduction, “Introduction to the Issue on Artificial Intelligence in the Public Sector: Risks and Benefits of AI for Governments.” He spoke to PANL Perspectives about AI tools and technology in the public and nonprofit sectors.

Question: How is AI being used within the public and nonprofit sectors?

The federal government recently released “AI Strategy for the Federal Public Service 2025-2027,” with four priorities: (1) establish an AI Centre of Expertise to support and to help coordinate government-wide AI efforts; (2) ensure AI systems are secure and used responsibly; (3) provide training and talent development pathways; and (4) build trust through openness and transparency in how AI is used. For more info: https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/gc-ai-strategy-overview.html

Adegboyega Ojo: Despite all the media frenzy about AI, and despite all we hear from government about AI, we have very little information on how public-sector and nonprofit-sector organizations are using AI technology and tools. They might be using generative AI tools like ChatGPT, but is that driven largely by individual initiative, or is it part of the organization’s broader strategies and plans? As a researcher, one way to find out is to look at Algorithmic Impact Assessment (AIA) reports on the Canadian Open Data Portal. Unfortunately, very few AIAs are published there, so we have to submit an Access to Information and Privacy (ATIP) request, and that’s not usually easy. Also, unfortunately, the adoption rate of AI is extremely low in the nonprofit sector. My team and I are looking at these issues.

In general, the adoption of AI in Canada seems to be slower than in other countries, and that’s not necessarily a bad thing, as it’s good to follow a thoughtful and cautious approach to AI implementation. Last year, a KPMG study found that 46% of Canadians use generative AI in their workplaces. Virtual assistants, chatbots, AI translation, and other generative AI tools are already embedded in customer relationship management systems, in marketing applications, on websites, and everywhere.

Nonprofit organizations are already using these indirectly – for instance, when using Copilot within Microsoft Office. However, in terms of direct use, the pace in the nonprofit sector is much slower at an organizational level. Individual workers might be using tools like Perplexity for searches, or Gemini or ChatGPT and other tools for brainstorming, research or drafting documents, but not much is happening in terms of organizational AI use for marketing, fundraising campaigns, and so on.

Question: How can generative AI technology help our sector?

In 2024, Canada updated its “Guide on the use of generative artificial intelligence”: https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/guide-use-generative-ai.html

AI tools are becoming more sophisticated, particularly in the areas of curated research, deep analysis and reporting, and I think adoption will ramp up soon. But users still have to check everything, because of AI hallucinations and other mistakes. For example, you should always ask for linked references in your queries. A website address on its own isn’t enough; you have to click and check. AI tools like ChatGPT, Perplexity and Grok 3 still make a lot of mistakes, but they are really good at providing linked references.
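As a rough illustration of that “click and check” habit, here is a minimal Python sketch that asks a model for linked sources and then verifies that each cited URL actually loads. The OpenAI client, model name and prompt wording are assumptions made for the example, not tools mentioned in the interview, and a reachable link still has to be read and verified by a person.

```python
# Minimal sketch: ask a model for linked references, then check each URL loads.
# Assumptions (not from the interview): the OpenAI Python SDK is installed,
# OPENAI_API_KEY is set, and the model name and prompt wording are illustrative.
import re

import requests
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": (
            "Summarize recent Canadian guidance on the use of generative AI in government. "
            "Include a full URL as a linked reference for every claim."
        ),
    }],
)
answer = response.choices[0].message.content or ""

# Naive URL extraction; trailing punctuation is stripped before checking.
urls = [u.rstrip(".,)") for u in re.findall(r"https?://\S+", answer)]
for url in urls:
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = None
    # A 200 response only means the page loads; its content still needs human review.
    label = "reachable" if status == 200 else f"check manually (status: {status})"
    print(f"{url} -> {label}")
```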

And AI tools are good at critiquing and analyzing documents. For example, you can compare a document containing specifications with another document intended to meet those specs, and request a detailed critique. AI excels at this type of analysis – evaluating patterns, consistency and alignment, for instance – and is weaker as a repository of factual knowledge.
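To make that concrete, below is a minimal sketch of a spec-versus-draft critique, again assuming the OpenAI Python SDK. The file names, model choice and prompt structure are illustrative only, not a recommended workflow; the same pattern works with any comparable model API.

```python
# Minimal sketch: ask a model to critique a draft against a specifications document.
# Assumptions (not from the interview): the OpenAI Python SDK is installed, and the
# file names, model choice and prompt structure are illustrative only.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# Hypothetical files: a funder's requirements and the draft meant to satisfy them.
spec = Path("funding_call_requirements.txt").read_text(encoding="utf-8")
draft = Path("grant_proposal_draft.txt").read_text(encoding="utf-8")

critique = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are reviewing a draft against a specification. "
                "List, point by point, where the draft meets, misses, or contradicts "
                "the specification, quoting the relevant passages."
            ),
        },
        {"role": "user", "content": f"SPECIFICATION:\n{spec}\n\nDRAFT:\n{draft}"},
    ],
)
print(critique.choices[0].message.content)
```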

Training employees is key. Without proper guidance, users might try AI, see the mistakes, get discouraged and think, “Oh my goodness, imagine if we sent this to a client. We’d be in serious trouble.” As a result, they may abandon the AI tool entirely. People need training on when and how to use AI effectively, understanding different contexts, modes, tasks and subtle nuances. Training and capacity building are key, even for tasks that appear straightforward.

Question: What potential harms or negative issues should we be aware of with generative AI tools?

In a case about a chatbot giving bad advice about plane tickets, a passenger took Air Canada to small claims court and won. The airline had argued that the chatbot was responsible for its own actions, but the adjudicator disagreed, found the airline liable, and ordered Air Canada to pay compensation to the passenger. https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416

Adegboyega Ojo: AI is changing decision-making, policy-making, service delivery and work in general. For example, Microsoft studied the effect of generative AI use by knowledge workers and found that it may be limiting their critical thinking skills. Will a similar dependence happen in the public sector? We need to understand the impact and effects of AI, both positive and negative.

Even when AI has a positive impact, we must ask, “Positive for whom?” For instance, if an organization provides a generative-AI-based chatbot, a tool that’s gaining popularity, some users might find it convenient and efficient. But others might find the same chatbot service difficult to use, unfriendly and even harmful. A notable example is the case against Air Canada, where a chatbot misled a customer into believing they were eligible for a refund on their ticket. The case has since become a poster child for the risks of AI-driven services.

While AI does a lot of good, it also has the potential to cause harm and safety issues. There are cross-border collaborative efforts to build a shared understanding of things like taxonomies of AI harms based on reported incidents. These cover privacy issues, copyright issues, different forms of bias, and potential governance and legal issues. We’ll likely see a more coordinated approach to addressing these issues over time.

Also, one of the things I’ve heard repeatedly in the nonprofit space is the need for clear and practical use cases of AI in action. People want to see tangible benefits and compelling business cases for generative AI in their organizations, with potential positives and negatives clearly explained.

Prof. Adegboyega Ojo is on LinkedIn. Photo of apps is courtesy of Saradasish Pradhan.

Sign up for PANL Perspectives, MPNL’s free newsletter.

Friday, March 7, 2025