Major efforts are afoot to govern the ethics of artificial intelligence. In a recent paper, Graeme Auld, Amanda Clarke, Ashley Casovan, and Benjamin Faveri offer a dynamic look at the pathways along which private governance is being mobilised to shape future AI governance. Drawing lessons from private governance initiatives in sustainability, the pathways distil different public-private interactions centered on the motivations and strategies of corporate and civil society actors.
The paper is part of a forthcoming special issue on AI governance in the Journal of European Public Policy.
Graeme Auld, Ashley Casovan, Amanda Clarke & Benjamin Faveri (2022) Governing AI through ethical standards: learning from the experiences of other private governance initiatives, Journal of European Public Policy, DOI: 10.1080/13501763.2022.2099449
ABSTRACT
A range of private actors are positioning varied public and private policy venues as appropriate for defining standards governing the ethical implications of artificial intelligence (AI). Three ideal-type pathways – oppose and fend off; engage and push; and lead and inspire – describe distinct sets of corporate and civil society motivations and actions that lead to distinct roles for, and relations between, private actors and states in AI governance. Currently, public-private governance interactions around AI ethical standards align with an engage and push pathway, potentially benefitting certain first-mover AI standards through path-dependent processes. However, three sources of instability – shifting governance demands, focusing events, and localisation effects – are likely to drive continued proliferation of private AI governance initiatives that aim to oppose and fend off state interventions or to lead and inspire redefinitions of how AI ethics are understood. A pathways perspective uniquely uncovers these critical dynamics for the future of AI governance.