
Graeme Auld, SPPA Professor (Carleton University) and Benjamin Faveri, Research Fellow in AI Governance, Law, and Policy (Arizona State University)

Benjamin Faveri and Graeme Auld published a background paper, Informing Possible Futures for the use of Third-Party Audits in AI Regulations, which provided the groundwork for Carleton's recently held workshop on the potential role of third-party audits in the regulation of harmful AI. Carleton then hosted a public panel on November 10th that discussed the rapidly developing efforts by governments and private actors to govern AI.

These two events hosted at Carleton took place just after the White House released the Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI on October 30, 2023, and the United Kingdom hosted the AI Safety Summit on November 1-2, 2023, and a few weeks before the European Union (EU) announced a provisional agreement between its Parliament and Council on the AI Act.

In Canada, Bill C-27, which includes the AI and Data Act, is currently being considered by the Standing Committee on Industry and Technology, having completed its first and second readings. The proposed AI and Data Act would establish audit requirements for high-impact AI systems under certain circumstances and envisions a role for audits in verifying the actions, policies, and measures that entities engaged in regulated activities take to manage the risks of AI impacts.

At their most basic, audits check, with some level of confidence, whether the target of the audit is meeting rules or expectations. The use of audits has expanded considerably in recent decades, and they have become a feature of many proposed and enacted regulatory approaches for governing the risks of negative AI impacts. The background discussion paper – Informing Possible Futures for the use of Third-Party Audits in AI Regulations – introduces key considerations that ought to be accounted for when devising a regulatory approach for AI that uses audits, whether third-party or internal.

The public panel discussed recent developments in the EU, Canada, and other jurisdictions, as well as evolving private sector initiatives and standards work on responsible and trustworthy AI. Experiences from other governance areas, particularly in the EU, served as reference points for reflecting on how AI governance may develop and on the key considerations policymakers ought to account for, such as the use of third-party audits in assessing the risks of harmful AI systems.

The public panel was moderated by Graeme Auld, with welcome remarks from Dean Brenda O’Neill. Panelists:
Ashley Casovan, AI Governance Center Managing Director, International Association of Privacy Professionals
Stefan Renckens, Associate Professor, University of Toronto
Adegboyega Ojo, Canada Research Chair in Governance and Artificial Intelligence & Professor, Carleton University

Watch the video recording of the public panel discussion: AI Governance: Current Developments & Future Directions.