Recognizing the many sources of uncertainty in the Artificial Intelligence and Data Act ("AIDA"), Innovation, Science and Economic Development Canada recently provided important clarity on the Government's intended approach to artificial intelligence (AI) regulation. Much of the uncertainty to date is attributable to the fact that the specific AI systems subject to the AIDA, along with the measures the legislation will impose, are to be defined through regulation at a later point in time.
In light of those unknowns, a companion document published on March 13, 2023 aims to "reassure actors in the AI ecosystem in Canada that the aim of the AIDA is not to entrap good faith actors or to chill innovation, but to regulate the most powerful uses of this technology that pose the risk of harm."
Building on this pledge, the document makes clear that "the Government intends to take an agile approach that will not stifle responsible innovation or needlessly single out AI developers, researchers, investors or entrepreneurs." It offers several assurances in support of these objectives:
- Consultations will be forthcoming to determine the approach to regulation
- Regulation will align with the provided guiding principles, and accord with regulation in other jurisdictions
- In initial years, the focus of AIDA enforcement would be on education, establishing guidelines, and helping businesses to come into compliance through voluntary means
- Enforcement would be done with the assistance of external private sector, academic and civil society expertise, to ensure enforcement activities are conducted appropriately in the context of the rapidly developing AI environment
The companion document also provides further guidance on the systems intended to be the primary target of regulation, and sample measures that may be imposed on various entities involved in the development of AI systems.
Examining the companion document's key points
Below, we review the companion document's key points of clarification in detail.
The AIDA is intended to serve as gap-filling legislation to ensure AI-specific risks will not fall through the cracks of existing consumer protection and human rights legislation. The protection of Canadians – particularly vulnerable groups like children, or historically marginalized groups – from collective harms by mitigating the risk of systemic bias in AI systems has been identified as a primary purpose of the legislation.
Industry consultations forthcoming to inform initial regulatory approach
The Government describes the AIDA as a first-step framework for a new regulatory system, and has expressed an intention to build upon this framework through an open and transparent regulatory development process in consultation with stakeholders. Implementation of the initial set of AIDA regulations is expected to take the following path, after Bill C-27 receives Royal Assent:
- Consultation on regulations (six months)
- Development of draft regulations (12 months)
- Consultation on draft regulations (three months)
- Coming into force of initial set of regulations (three months)
Guidance on "high-impact systems"
The Government identified the following as examples of systems that are of interest in terms of their potential impacts:
- Screening systems impacting access to services or employment
- Biometric systems used for identification and inference
- Systems that can influence human behaviour at scale, such as AI-powered online content recommendation systems
- Systems critical to health and safety, such as autonomous driving systems and systems making triage decisions in the health sector
Further, the Government expressed the following to be among the key factors that persons responsible for AI systems must assess in determining whether an AI system is high impact:
- Evidence of risks of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences
- The severity of potential harms
- The scale of use
- The nature of harms or adverse impacts that have already taken place
- The extent to which (for practical or legal reasons) it is not reasonably possible to opt out of that system
- Imbalances of economic or social circumstances, or age of impacted persons
- The degree to which the risks are adequately regulated under another law
Tailoring of measures related to risks
The regulated activities laid out in the AIDA would each carry distinct obligations, tailored to the context and risks of the specific regulated activity in the lifecycle of a high-impact AI system.
The specific measures required by regulation would be developed through extensive consultation and would be based on international standards and best practices. The guiding principles to be used in prescribing such measures are:
- Human Oversight, requiring AI systems to provide a means of meaningful oversight, including a level of interpretability appropriate to the context
- Monitoring, requiring measurement and assessment of output
- Transparency, requiring sufficient information be provided to the public to allow them to understand the capabilities, limitations, and potential impacts of the systems
- Fairness and Equity, requiring action be taken to mitigate discriminatory outcomes for individuals and groups
- Safety, requiring proactive assessment and identification of potential harms that could result from use or foreseeable misuse of the system, and taking mitigating measures
- Accountability, requiring governance mechanisms, and proactive documentation of policies, processes and measures implemented
- Validity, requiring systems to perform consistently with intended objectives
- Robustness, requiring systems that are stable and resilient
Tailoring of monitoring obligations
Prescribed monitoring obligations would be proportionate to the level of influence that an actor has on the risk associated with the system. For example, as end users of general-purpose systems have limited influence over how such systems function, developers of general-purpose systems would be the ones responsible to ensure that risks related to bias or harmful content are documented and addressed.
Similarly, businesses involved only in the design or development of a high-impact AI system, but with no practical ability to monitor the system after the development, would have different obligations from those managing its operations. Individual employees would not be expected to be responsible for obligations associated with the business as a whole.
Delayed enforcement and initial voluntary compliance
In the initial years after it comes into force, the focus of AIDA enforcement would be on educating different stakeholders, establishing guidelines and helping businesses to come into compliance through voluntary means. The Government intends to allow ample time for the ecosystem to adjust to the new framework before enforcement actions are undertaken.
Further, smaller firms would not be expected to have governance structures, policies, and procedures comparable to those of larger firms with a greater number of employees and a wider range of activities. Small- and medium-sized businesses would also receive particular assistance in adopting the practices needed to meet the requirements.
The Government would also mobilize external expertise in the private sector, academia and civil society to ensure that enforcement activities are conducted appropriately in the context of a rapidly developing environment.
Administrative Monetary Penalties (AMPs)
AMPs would be designed in a manner proportionate to the objective of encouraging compliance. For example, AMPs could be applied in the case of clear violations where other attempts to encourage compliance had failed. AMPs would also be tailored with respect to the relative size of firms.
Flexible but uncertain: Understanding the Government's approach to date
The Artificial Intelligence and Data Act (AIDA) is just one of three pieces of proposed legislation in Bill C-27. Tabled by the Government of Canada on June 16, 2022, Bill C-27 would also introduce the Consumer Privacy Protection Act (CPPA) and the Personal Information and Data Protection Tribunal Act (PIDPTA).
The AIDA would be the first piece of legislation in Canada to regulate AI systems in the private sector. If passed, it would impose regulatory requirements for both AI systems generally and those AI systems specifically referred to as "high-impact systems."
From a policy perspective, the Government of Canada has positioned the AIDA as a regulatory tool to protect Canadians, ensure the development of responsible AI in Canada and prominently position Canadian firms and values in global AI development. To achieve these objectives, the Government has sought to align the AIDA with existing Canadian legal frameworks, along with legislation and norms from other jurisdictions.
The specific AI systems subject to the AIDA, along with the required measures it will impose, will be defined by regulation.
For example, the key term "high-impact systems" is not defined in the AIDA itself; rather, it will be defined through criteria to be set out in regulations. Further, the measures that will be required to be implemented by persons responsible for high-impact systems will be set out in the regulations. An Administrative Monetary Penalty (AMP) scheme may also be set out via regulation.
Defining the AIDA's requirements via regulation will allow the Government to respond to industry developments and update the specific systems regulated and measures to be implemented without legislative amendment. While this is efficient and enables swift policy adjustments, it means there is little guidance available within the text of the AIDA itself to assist industry with preparing to comply with the new regulatory system.
The clarifications provided in the Companion document, as described above, are therefore key to understanding and anticipating upcoming AI regulation in Canada.
The AIDA companion document provides early-stage insight to assist those that design, develop, offer or manage AI systems, with a view to helping them better understand the requirements of the AIDA. The draft AIDA and its requirements will continue to develop as Bill C-27 moves through the legislative process and as the industry consultations to which the Government has committed take place.
For more information on the AIDA, we invite you to review our deep dive into the Act's provisions, as well as our one-page high-level summary. In the meantime, if you would like to discuss this topic further, please contact the authors or a member of Gowling WLG's Cyber Security and Data Protection Group.