Policies and Guidelines
Review policies and guidance documents on the use of AI from Carleton University, government agencies and more.
Carleton Policies and Guidelines
Carleton has a number of policies that tie into the use of AI, including:
- Acceptable Use of Information Technology and Email: Defines the university’s position on the provisioning, use, operation and decommissioning of IT resources
- Data Protection and Risk Management Policy: Defines the requirements for classifying and protecting the university’s physical and digital data assets to mitigate security risks
- Adoption of Technology Enhanced Learning Resources: Establishes guidelines for the use of third-party digital learning resources in courses, ensuring they enhance student learning while considering affordability and accessibility
- Responsible Conduct of Research Policy (Section 5): Promotes and facilitates the responsible conduct of all research at the university
You can review these policies in full on the University Secretariat’s website.
The Office of Research Ethics also provides Preliminary Guidance on the Use of Artificial Intelligence in Human Research.
Remember, Microsoft Copilot is Carleton’s only approved genAI platform because it offers Enterprise Data Protection and has successfully completed the university’s Data Protection Risk Assessment (DPRA) process. Learn more about AI data protection.
Government Guidelines and Principles
Principles for Responsible, Trustworthy and Privacy-Protective genAI
Federal and provincial privacy authorities have launched a set of principles to advance the responsible, trustworthy and privacy-protective development and use of genAI in Canada.
Guide on the Use of Generative Artificial Intelligence
This document, prepared by the Government of Canada, provides guidance to federal institutions on their use of genAI tools.
Ethics of Artificial Intelligence
This website, prepared by UNESCO, provides a global standard for AI ethics, ensuring the protection of human rights and dignity through principles such as transparency, fairness and human oversight.