Bill C-27: Details on Canada's proposed Artificial Intelligence and Data Act

21 minute read
October 28, 2022

We first wrote about the federal government's newest proposal to introduce sweeping changes to Canadian privacy laws in our article published on June 24, 2022. As we discussed in that article, Bill C-27 resurrects the former Bill C-11 and retains its core elements, but also introduces new legislation: the Artificial Intelligence and Data Act ("AIDA").



In this article, we take a deeper dive into this newly proposed legislation and survey other international developments in artificial intelligence ("AI") regulation. At the time of publishing, Bill C-27 remains at the preliminary stages of its second reading and has not received substantive changes.

AI applications and limitations

AI has innumerable areas of application, such as facial or speech recognition, self-driving vehicles, chatbots, navigation, targeted marketing, personalized learning and recruitment support.

While AI can improve our lives, it can also create unintended and unforeseen consequences. A key concern is the potential for misuse of personal information, as vast amounts of data are required to develop an AI system. In addition, AI systems can produce biased outcomes based on prohibited grounds of discrimination such as sex, gender and race. Given the vast application and scaling of AI, any bias could have considerable implications.

In a release from Innovation, Science and Economic Development Canada, the federal government stated that the AIDA will introduce new rules to strengthen Canadians' trust in the development and deployment of AI systems.

What is the purpose of the AIDA?

The stated purposes of the AIDA are to:

  1. regulate international and interprovincial trade and commerce in AI systems by establishing common requirements applicable across Canada for the design, development and use of those systems; and
  2. prohibit certain conduct in relation to AI systems that may result in serious harm to individuals or harm to their interests. The AIDA defines "harm" as (a) physical or psychological harm to an individual, (b) damage to an individual's property, or (c) economic loss to an individual.

We address the definition of an AI system in our previous article.

To which entities will the AIDA apply?

The regulation of AI under the AIDA will focus on those persons carrying out a "regulated activity," which means any of the following in the course of international or interprovincial trade and commerce:

  1. processing or making available for use any data relating to human activities for the purpose of designing, developing or using an AI system; or
  2. designing, developing or making available for use an AI system or managing its operations.

These descriptions are broad, and it is easy to imagine that many AI systems would fall within the meaning of a regulated activity. The AIDA imposes regulatory requirements both for AI systems generally and for those AI systems specifically referred to as "high-impact systems," which we discuss in further detail below.

As noted above, the AIDA limits application to interprovincial and international commerce, ostensibly leaving intra-provincial AI matters to the regulation of the provinces, if and when they so choose. This is narrower than the Personal Information Protection and Electronic Documents Act's ("PIPEDA") application to all organizations that collect, use or disclose personal information in the course of commercial activities, except where there is substantively similar provincial legislation.

The AIDA also exempts the federal government from its application. Most federal government entities must comply with an existing Directive on Automated Decision-Making (the "Directive"), the objective of which is to ensure such systems are used in a way that reduces associated risks to Canadians and federal institutions. Although the Directive is limited to automated decision-making, it may provide some insight into the regulatory framework that will follow if the AIDA becomes law. Under the Directive, government agencies must complete an algorithmic impact assessment of a system before putting it into production. Systems are assessed on various factors, with impact levels varying based on the system's effects on the rights, health and well-being of individuals or communities, on economic interests and the sustainability of an ecosystem, and on the reversibility and duration of those effects.

Canadian regulation of AI systems in the private sector

The degree of regulation of private sector AI systems under the AIDA will depend in part on whether the system falls within the definition of a "high-impact system," with such systems being subject to a higher degree of regulation. As presently drafted, AI systems subject to the AIDA will fall into one of only two categories: those that are high-impact systems and those that are not (meaning AI systems that are within the scope of the AIDA but that do not meet the definition of a high-impact system). The differences between regulated activities and high-impact systems are outlined below.

By contrast, regulatory developments in the EU (some of which are discussed below) consider degrees of impact. The federal government's Directive also uses impact-level definitions (ranging from "little or no" impact to "very high" impact) and, depending on the degree of impact, either the Deputy Head or the Treasury Board will be responsible for approval of the system. While the Treasury Board may impose consequences for non-compliance, these consequences are not equivalent to the broader enforcement powers under the AIDA.

Anonymized data

Under the proposed Consumer Privacy Protection Act, once personal information has been anonymized, that Act no longer applies to such information. By contrast, under the AIDA, a person who carries out a regulated activity and who processes or makes available for use anonymized data in the course of that activity will be required to establish measures with respect to (a) the manner in which data is anonymized, and (b) the use or management of anonymized data. The AIDA does not define anonymized data.

High-impact systems

The existing definition of a "high-impact system" is vague and will be addressed by criteria to be established by regulation. It is possible that what constitutes a "high-impact system" will be similar to what the EU defines as a "high-risk" AI system, or it may take some of its framework from the Directive. At this time, it is difficult to evaluate the impact and scope of the AIDA, and we will have to wait for the regulations to define a high-impact system.

The AIDA prohibits certain conduct in relation to a "high-impact system" that may result in serious harm to individuals or biased outputs. The AIDA specifies that "biased output" means "content that is generated, or a decision, recommendation or prediction that is made, by an [AI] system and that adversely differentiates… in relation to an individual on one or more of the prohibited grounds of discrimination set out in… the Canadian Human Rights Act, or on a combination of such prohibited grounds…" However, biased output will not include content, or a decision, recommendation or prediction, the purpose and effect of which are to prevent disadvantages that are likely to be suffered by, or to eliminate or reduce disadvantages that are suffered by, any group of individuals when those disadvantages would be based on or related to prohibited grounds.

Those responsible for an AI system will be required to assess whether the AI system is a high-impact system. Where an AI system meets this definition, the person responsible must:

  1. establish measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system;
  2. establish measures to monitor compliance with the mitigation measures and the effectiveness of those mitigation measures;
  3. where the system is made available for use, publish on a public website a plain-language description of the system that explains, among other things, how the system is intended to be used, the types of content that it is intended to generate and the types of decisions, recommendations or predictions it is intended to make, and the risk mitigation measures established;
  4. where the person is managing the operation of the system, publish on a public website a plain-language description of the system that explains, among other things, how the system is used, the types of content that it generates and the decisions, recommendations or predictions that it makes, and the mitigation measures established; and
  5. notify the Minister if use of the system results or is likely to result in material harm.

Audits and information sharing

The Minister responsible for administering the AIDA will have considerable powers under the act to promote and ensure compliance.

One of the options available to the Minister to ensure compliance is the ability to conduct an audit, or direct that one be conducted by a qualified person (at the expense of the person being audited), where the Minister believes there has been a contravention of certain sections of the act. Unlike under the existing PIPEDA framework, which provides minimal enforcement powers, the potential implications of an audit for those operating AI systems are substantial. The Minister may:

  1. require any person responsible for a high-impact system to cease using it or making it available for use where the Minister has reasonable grounds to believe that the use of the system gives rise to a serious risk of imminent harm;
  2. require the audited person to implement any measure specified in the order to address anything referred to in an audit report; or
  3. require a person to publish on a publicly available website certain information, including audit details, so long as doing so does not require the disclosure of confidential business information.

Those subject to an order must comply with it. As a further enforcement measure, the Minister may file a copy of any order with the Federal Court.

While there are obligations on the Minister to take measures to maintain confidentiality, there are several circumstances in which the Minister may disclose information to third parties, including analysts. Where the Minister believes the information obtained may also constitute violations of certain legislation, the Minister may also disclose this information to the entities responsible for enforcing those statutes. These entities include the Privacy Commissioner and the Canadian Human Rights Commission (or their provincial counterparts), as well as the Commissioner of Competition and the Canadian Radio-television and Telecommunications Commission. This enhanced information sharing can considerably expand the scope of regulatory review by government compliance and enforcement agencies.

Recordkeeping and production

Persons carrying out any regulated activity must also retain records describing, in general terms, the measures established with respect to anonymized data, high-impact system assessments, and the mitigation and monitoring obligations specified above. Regulations may require the retention of additional records.

The responsible Minister, by order, may require production of these records. In addition, where the Minister has reasonable grounds to believe that the use of a high-impact system could result in harm or biased output, the Minister may require, by order, a person to provide specific records relating to that system.

Administration

In addition to limited regulation-making authority, the AIDA provides authority for the Minister to designate a senior official, the "Artificial Intelligence and Data Commissioner," whose role would be to assist the Minister in the administration and enforcement of the AIDA. The Minister may also designate analysts and establish an advisory committee.

Administrative penalties

Those who violate the AIDA can be liable for administrative monetary penalties. The Governor in Council will have authority to create an enforcement scheme, giving it the power to:

  1. designate violations;
  2. classify violations as minor, serious or very serious;
  3. commence proceedings;
  4. define available defences;
  5. determine the range and amount of administrative penalties that may be imposed;
  6. regulate reviews or appeals of findings that a violation has been committed and of the imposition of administrative monetary penalties;
  7. regulate compliance agreements; and
  8. regulate the persons or classes of persons who may exercise any power, or perform any duty or function, in relation to the scheme, including the designation of such persons or classes of persons by the Minister.

Offences related to AI systems

Violations of the AIDA can also constitute an offence, which in turn can result in severe fines and, in prescribed circumstances, potential imprisonment. However, the AIDA requires an election: a person may be subject either to an administrative monetary penalty for a violation or to the more serious sanctions for an offence, but not both. We provide further details regarding contraventions of the AIDA in our previous article.

International developments in artificial intelligence regulation

Countries, regions and inter-governmental organizations are working on guidance, standards, regulation and laws specific to AI. Notable international collaborations include the Global Partnership on Artificial Intelligence (GPAI, which has 25 member countries, including Canada), UNESCO's work on AI and projects by standard-setting organizations, including the Institute of Electrical and Electronics Engineers (IEEE) and the International Telecommunication Union (ITU). Emerging regulation and general concerns over AI ethics may also produce a new industry offering "AI assurance."

There is broad international consensus on the key legal and regulatory challenges of AI: safety (including robustness of performance and cyber security), transparency and explainability, accountability, human control, bias mitigation, and privacy protection (see, for example, the European Commission's Ethics guidelines for trustworthy AI). There is less consensus on whether and how to tackle the broad potential and current social and economic effects of widespread AI adoption.

The most advanced and far-reaching proposal for "cross-cutting" regulation of AI per se is the European Union's proposed AI Act. Like the EU's privacy law (the "GDPR"), it is expected to become a de facto requirement for companies internationally (the so-called "Brussels Effect"). The draft AI Act prohibits some uses of AI (such as "subliminal techniques") and defines others as "high-risk" (such as biometric identification), for which it requires special measures relating to risk management, data governance, technical documentation, record keeping, transparency, human oversight, accuracy, robustness and cyber security. The latest draft also includes requirements for general-purpose AI systems that could be used for high-risk applications. The AI Act is not expected to come into force before 2024, but companies should start addressing it now, as it will require the development and implementation of technical solutions that may have a material effect on product and service development.

Below the level of cross-cutting regulation, sector-specific regulators are working on frameworks for AI use. In the UK, for example, work is ongoing by, among others, the Ministry of Justice, the Law Commission, the Department for Transport, the Civil Aviation Authority, the Information Commissioner, the Competition and Markets Authority, the Financial Conduct Authority, the Medicines and Healthcare products Regulatory Agency and the National Health Service. However, a shortage of AI specialists hampers regulatory work internationally (see, for example, this report by the UK's Alan Turing Institute).

Since AI has general application across all sectors and activities, all laws potentially apply to its use, and some, particularly those relating to privacy and intellectual property, are already significant considerations.

Our previous articles on AI in France, China, Singapore, the UAE and the UK provide further information on current and future AI regulation in those jurisdictions.

Next steps

As Bill C-27 is only at the second reading stage, there will likely be much more debate, and potential amendments, as the bill makes its way through Parliament. International developments regarding the regulation of AI are likely to have a material effect on subsequent changes to the AIDA and on any regulations enacted after it becomes law.

If you would like to discuss further or have any questions, please contact the authors or a member of the Employment, Labour & Equalities Group.

This article was written with editorial support from members of our Canadian Privacy Group: Wendy Wagner, Naïm Antaki and Chris Oates.


THIS IS NOT LEGAL ADVICE. The information presented on this website, in whatever form, is provided for informational purposes only. It does not constitute legal advice and should not be interpreted as such. No user should make, or refrain from making, decisions based solely on this information, nor disregard professional legal advice or delay consulting a professional on the basis of something read on this website. Gowling WLG professionals will be pleased to discuss possible options with users concerning specific legal questions.
