Alycia Riley
Lawyer
Article
In recent years, jurisdictions have been scrambling to develop legislation regulating the development and use of AI. Some jurisdictions have already implemented laws, such as the AI Act in the EU, while others continue to assess how to address the unique challenges AI raises in such a rapidly changing technology landscape.
Within Canada, AI regulations apply in some sector-specific areas, but there are no comprehensive laws regulating the use and development of AI, and such laws may still be several years away.
However, several organizations have issued standards and risk management frameworks for AI use. This article sets out some of the main frameworks that businesses can use as they develop their internal AI governance practices.
As with the development of any product or adoption of any system, businesses need standards to assess function, benefits, risks and costs. AI in particular can present unique risks, or aggravate existing risks, within your business.
Ultimately, your AI governance program should be specific to your industry and business. There is no one-size-fits-all approach. Businesses should also develop internal AI risk classification systems that can address different use cases. Using frameworks from accredited organizations as part of your AI governance program can help your business explain its values, maintain accountability, and build trust with internal and external stakeholders.
While there is an ever-expanding list of frameworks to consider, they tend to share common principles such as transparency and data protection.
The ISO is a non-governmental organization that publishes thousands of standards and guidance documents agreed upon by international experts, several of which pertain to AI and risk management.
HUDERIA presents a risk-based approach to assessing and mitigating adverse impacts developed for the Council of Europe’s Framework Convention.[1] The framework proposes a collection of interrelated processes, steps and user activities, including (1) preliminary context-based risk analysis, (2) stakeholder engagement, (3) human rights, democracy and rule of law impact assessment, and (4) human rights, democracy and rule of law assurance case. Key principles within the framework include respect for and protection of human dignity, protection of human freedom and autonomy, harm prevention, non-discrimination, transparency and explainability, and data protection and the right to privacy.
The OECD Framework for the Classification of AI Systems provides a user-friendly tool to evaluate AI systems from a policy perspective. It assesses AI systems across the following dimensions: (1) People & Planet, (2) Economic Context, (3) Data & Input, (4) AI Model, and (5) Task & Output. Each dimension contains a subset of properties and attributes to define and assess policy implications and to guide an innovative and trustworthy approach to AI as outlined in the OECD AI Principles.[2]
In January 2023, the National Institute of Standards and Technology (“NIST”) published its Artificial Intelligence Risk Management Framework (AI RMF 1.0) to help manage the risks AI poses to individuals, organizations and society. The AI RMF articulates characteristics of trustworthy AI and offers guidance for addressing them. Trustworthy AI requires balancing these characteristics based on the AI system’s context of use.
In July 2024, NIST released the Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST-AI-600-1) pursuant to President Biden’s Executive Order on AI. NIST describes this as a cross-sectoral profile of and companion resource for the AI RMF. The Profile recognizes that GAI risks can vary along several dimensions including the scope and stage of lifecycle. Examples of risks the Profile identifies include confabulation (hallucinations), dangerous content, data privacy, harmful bias or homogenization and environmental impacts.
In September 2023, the Minister of Innovation, Science and Industry announced a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, which we discussed in a prior publication. The purpose of the Code is to provide Canadian companies with common standards and demonstrate responsible development and use of generative AI systems until regulation comes into effect. Developers and managers who sign on voluntarily commit to working to achieve outcomes in advanced generative systems spanning accountability, safety, fairness and equity, transparency, human oversight and monitoring, and validity and robustness.
In 2019, the Singapore Personal Data Protection Commission (PDPC) released its first edition of the Model AI Governance Framework for consultation. The Model Framework provides guidance to private sector organizations to address key ethical and governance issues when deploying AI solutions. The PDPC released the second edition of the Model Framework in January 2020.[3] The Model Framework consists of 11 AI ethics principles: (1) Transparency, (2) Explainability, (3) Repeatability/reproducibility, (4) Safety, (5) Security, (6) Robustness, (7) Fairness, (8) Data governance, (9) Accountability, (10) Human agency and oversight, and (11) Inclusive growth, societal and environmental well-being.
While not a governance framework itself, AI Verify is an AI governance testing framework and software toolkit that validates the performance of AI systems against internationally recognized principles through standardized tests, and is consistent with international AI governance frameworks such as those from the European Union, OECD and Singapore.[4]
We encourage those using or developing AI to review these frameworks and consider to what extent they may incorporate specific risk mitigation measures. For more information, please review each of the frameworks in detail or contact your trusted Gowling WLG professional.
[2] Organisation for Economic Co-operation and Development (OECD), OECD Framework for the Classification of AI Systems (February 2022) at p 3, available online.
[3] Personal Data Protection Commission, Singapore’s Approach to AI Governance.
[4] AI Verify Foundation, What is AI Verify?
THIS DOES NOT CONSTITUTE LEGAL ADVICE. The information presented on this website, in whatever form, is provided for informational purposes only. It does not constitute legal advice and should not be interpreted as such. No user should make or refrain from making decisions in reliance solely on this information, nor disregard professional legal advice or delay consulting a professional on the basis of something they have read on this website. Gowling WLG professionals will be pleased to discuss possible options with users regarding specific legal questions.