The Trustworthy AI Lab @ TIM, Carleton University

The mission of the Trustworthy AI Lab is to advance AI research, education, policy and best practices to support the adoption of ethical, responsible, mindful, sustainable and trustworthy AI systems and digital technologies in general.

The Trustworthy AI Lab is hosted and managed by the Technology Innovation Management (TIM) Program at Carleton University. It is an interdisciplinary initiative involving multiple international researchers from various disciplines and providing a peer forum for academic and industrial research, value creation projects and applications. In addition, the Lab offers workshops and continuous learning opportunities for citizens, local and regional private entities, professionals, engineers and developers, and students who want to learn more about the ethical and trustworthy use of AI technologies and systems. A key aspect of the Trustworthy AI Lab is the engagement and support of student projects that evaluate the trustworthiness of AI systems and digital technologies adopted by external organizations from both the private and public sectors.

The Trustworthy AI Lab is affiliated with the Z-Inspection® Initiative. It adopts the Z-Inspection® trustworthiness assessment process in combination with a comprehensive design thinking approach to assess the trustworthiness of real-life AI systems and applications.

Lab Members:

  • Stoyan Tanev, PhD, Associate Professor, TIM Program, Sprott School of Business, Carleton University
  • Somaieh Nikpoor, PhD, Adjunct Professor, TIM Program, Sprott School of Business; Lead-AI Strategy and Data Science at the Government of Canada; Certified Z-Inspection Teaching Professional
  • Tony Bailetti, PhD, Associate Professor, TIM Program, Sprott School of Business, Carleton University
  • Nuša Fain, PhD, Assistant Professor, TIM Program, Sprott School of Business, Carleton University
  • David Hudson, PhD, Adjunct Professor, TIM Program, Sprott School of Business; ICT Advisor at Innovation, Science and Economic Development Canada, Ottawa, Ontario, Canada
  • Michael Weiss, PhD, Associate Professor, TIM Program, Systems & Computing Engineering Department, Carleton University
  • Daniel Cardenas, M.Eng., Contract Instructor, TIM Program, Sprott School of Business, Carleton University
  • Roberto V. Zicari, PhD, Affiliated Professor, Yrkeshögskolan Arcada, Helsinki, Finland; Adjunct Professor, Seoul National University, South Korea
  • Norm Vinson, PhD, Human-Computer Interaction (HCI) Research Officer at National Research Council Canada

Teaching Activities:

The Trustworthy AI Lab members are currently involved in the following educational activities:

  • TIMG 5204, Responsible Artificial Intelligence: a summer Master's-level course of the Technology Innovation Management Program structured around the Z-Inspection® evaluation framework.
  • Design Thinking for Digital Transformation (DT4DT) – a three-day professional development program for aspiring professionals and managers interested in adopting trustworthy digital technologies to drive digital business transformation initiatives.

Z-Inspection® is a holistic process for evaluating the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It uses the European Union High-Level Expert Group's (EU HLEG) Ethics Guidelines for Trustworthy AI.

The Z-Inspection® process is distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license.

Z-Inspection® is listed in the new OECD Catalogue of AI Tools & Metrics: https://oecd.ai/en/catalogue/tools/z-inspection

New Pilot Project: Assessing the Trustworthiness of Generative AI Applications in the Context of Higher Education

The Trustworthy AI Lab @ TIM, Carleton University is participating in this pilot project of the Z-Inspection® initiative, which aims to assess the use of Generative AI in use cases emerging within the context of higher education.

This pilot project focuses on assessing the ethical, technical, domain-specific (i.e., higher education) and legal implications of the use of Generative AI products and services within the university context.

We follow the UNESCO guidance for policymakers on AI and education, in particular policy recommendation 6: pilot testing, monitoring and evaluation, and building an evidence base.

Approach

We encourage and support students in adopting the Z-Inspection® process, identifying and engaging with an interdisciplinary team of experts to assess the trustworthiness of Generative AI for their specific final Master's projects.

For more information, see: https://z-inspection.org/pilot-project-assessing-trustworthiness-of-the-use-of-generative-ai-for-higher-education/