Canada publishes Voluntary Code of Conduct on Generative AI Systems

12 October 2023

As previously published, the Federal Government tabled Bill C-27 in June 2022, which includes the Artificial Intelligence and Data Act ("AIDA"). In past articles, we commented on how AIDA, as presently drafted, contains considerable uncertainty because the AI systems subject to it, along with the measures the act will impose, are to be defined through regulation at some point in the future.

Recognizing the many sources of uncertainty in AIDA, Innovation, Science and Economic Development Canada offered insight into the Government's intended approach to AI regulation through its publication of a companion document in March 2023. This document aimed to "reassure actors in the AI ecosystem in Canada that the aim of the AIDA is not to entrap good faith actors or to chill innovation, but to regulate the most powerful uses of this technology that pose the risk of harm."

Innovation, Science and Economic Development Canada has now published a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, which aims to address and mitigate certain risks associated with the use of generative AI. Pending operational clarity from AIDA and any enacted regulations, the Code identifies measures that firms should apply when developing and/or managing generative AI systems with general-purpose capabilities, as well as additional measures for firms that make their AI systems widely available (and therefore subject to a wider range of potentially harmful or inappropriate uses).

As part of this voluntary commitment, developers and managers of advanced generative AI systems commit to achieving the following outcomes, based on their respective roles and the use of the AI system.

  1. Accountability – This applies to all developers and managers, and includes:
    • developing a comprehensive risk management framework (i.e. policies, procedures and training);
    • sharing information and best practices on risk management with firms playing complementary roles in the AI ecosystem; and
    • for developers creating public use generative AI systems, employing multiple lines of defence prior to release (e.g. third-party audits).
  2. Safety – This includes:
    • for all developers and managers, conducting a comprehensive assessment of the AI system's reasonably foreseeable potential adverse impacts, including risks associated with inappropriate or malicious use; and
    • for developers of all generative AI systems, including public use,
      • implementing proportionate measures to mitigate identified risks of harm, and
      • making appropriate system usage information available to downstream developers and managers, including mitigation measures.
  3. Fairness and Equity – This applies specifically to developers of all generative AI systems, including public use, and includes:
    • assessing and curating datasets used for training to manage data quality and potential biases; and
    • implementing diverse testing methods and measures to assess and mitigate risk of biased output prior to release of the AI system.
  4. Transparency – This includes:
    • for developers of public use generative AI systems,
      • publishing information on the system's capabilities and limitations,
      • implementing a reliable and freely available method to detect content generated by the system, with a near-term focus on audio-visual content (e.g. watermarking), and
      • publishing a description of the types of training data used to develop the system, and measures taken to identify and mitigate risks; and
    • for managers of all generative AI systems, including public use, ensuring systems that could be mistaken for humans are clearly and prominently identified as AI systems.
  5. Human Oversight and Monitoring – This includes:
    • for developers of all generative AI systems, including public use, maintaining a database of reported incidents after deployment, and providing updates to ensure effective mitigation measures; and
    • for managers of all generative AI systems, including public use, monitoring the operation of the system for harmful uses or impacts after it is made available (e.g. maintaining third-party feedback channels, and informing the developer and/or implementing usage controls to mitigate harm).
  6. Validity and Robustness – This includes:
    • for developers of all generative AI systems, including public use:
      • using a wide variety of testing methods across a spectrum of tasks and contexts prior to deployment to measure performance and ensure robustness,
      • employing adversarial testing to identify vulnerabilities,
      • assessing cyber-security risk (including data poisoning) and implementing proportionate measures to mitigate risks, and
      • benchmarking to measure the model's performance against recognized standards; and
    • for managers of public use generative AI systems, assessing cyber-security risk (including data poisoning) and implementing proportionate measures to mitigate risks.

The Code already has a number of prominent signatories, signalling its importance within the current Canadian framework of generative AI system development and maintenance. The Code may also have some influence on any amendments to the present draft of AIDA. Notably, with Bill C-27 currently before the Standing Committee on Industry and Technology, the Federal Government recently proposed a number of amendments to add structure and specificity to AIDA, including:

  • the development of classes of systems typically considered "high-impact" (e.g. AI systems making decisions about loans and employment);
  • creating distinct obligations for general-purpose AI systems (e.g. ChatGPT) that are available for public use and can generate text, images and audio;
  • clearer differentiation within the AI value chain (i.e. developers versus managers of AI systems);
  • strengthening and clarifying the role of the proposed AI Commissioner, including their ability to share information and cooperate with other regulators (e.g. the Privacy Commissioner and Competition Commissioner); and
  • amendments to align with legislation in the EU and other OECD jurisdictions, to ensure Canadian companies remain interoperable with other jurisdictions and retain access to international markets.

Our Artificial Intelligence team is continuing to closely monitor AIDA and related developments. For more information, please contact the authors.

NOT LEGAL ADVICE. Information made available on this website in any form is for information purposes only. It is not, and should not be taken as, legal advice. You should not rely on, or take or fail to take any action based upon this information. Never disregard professional legal advice or delay in seeking legal advice because of something you have read on this website. Gowling WLG professionals will be pleased to discuss resolutions to specific legal concerns you may have.