Anton Grabolle / Better Images of AI / AI Architecture / CC-BY 4.0

A major issue that remains to be figured out for AI governance is how potential third-party audits will be performed and overseen. This piece, written by Benjamin Faveri, Graeme Auld, and Stefan Renckens, reviews three current challenges to AI audits and presents thoughts on ways forward.

AI Audit Objects, Credentialing, and the Race-to-the-Bottom: Three AI Auditing Challenges and a Path Forward


The contours of risk regulations for artificial intelligence (AI) systems are becoming clearer. A consistent theme across many countries' regulations is some role for AI audits, with the EU's AI Act, Canada's proposed AI and Data Act, and the United States Executive Order on the "Safe, Secure, and Trustworthy Development and Use of AI" all including language to this effect. Private and non-profit auditing firms are also offering, or preparing to offer, varied AI audit services (such as training and credentialing AI auditors and performing AI ethics or data audits) to meet this growing demand for audits of AI systems and the organizations that build them.

However, efforts to meet AI audit service demands, and by extension any use of audits by public regulators, face three important challenges. First, it remains unclear what the audit object(s) will be – the exact thing that gets audited. Second, despite efforts to build training and credentialing programs for AI auditors, the supply of capable AI auditors is lagging. And third, unless markets have clear regulations around auditing, AI audits could suffer from a race to the bottom in audit quality. We detail these challenges and present a few ways forward for policymakers, civil society, and industry players as the AI audit field matures.

Read the full article, published by Tech Policy Press, here.

This article is based on work done for a Social Sciences and Humanities Research Council (SSHRC) Connection Grant, Informing Canadian Regulation of High-Risk Artificial Intelligence.