Artificial intelligence is revolutionizing how we work—reshaping everything from routine tasks to strategic decision-making across all levels of an organization. And while the potential benefits are vast, they come hand-in-hand with equally significant challenges. As excitement around AI’s capabilities continues to grow, so too must our awareness of its limitations.

This isn’t just about waiting for regulators to catch up. Employers are already facing real risks, from flawed AI outputs and “hallucinations” to privacy violations and compliance gaps. The practical and legal implications are mounting, and organizations that don’t proactively address them could be exposed to liability and reputational damage.

This article explores the risks associated with AI in the workplace and offers practical advice for employers to mitigate these risks.

The current state of AI regulation in Canada and beyond

Canadian employers navigating the use of AI must do so within a growing patchwork of legal and regulatory frameworks. Beyond existing privacy laws,[1] AI raises complex issues around data protection, intellectual property, employment standards and transparency.

With provincial, federal and international regulators each taking a slightly varied approach, employers need to stay attuned to the direction of regulation in the jurisdictions where they operate. Staying compliant isn’t just a legal necessity—it’s a strategic imperative for reducing risk and ensuring responsible AI implementation.

Provincial regulation

The provinces of Alberta, British Columbia and Québec have privacy laws that apply to employment relationships in the private sector.

In Québec, this includes an obligation to inform an individual when their personal information is used to render a decision based exclusively on automated processing of that information. The individual also has the right to know what information was used in making the decision and the principal factors and parameters that led to it, and to have their personal information corrected. The individual may also submit observations to a member of the organization’s personnel who can review the decision.

While Ontario does not have comparable legislation, 2022 amendments to the Employment Standards Act, 2000 (ESA) require certain employers to implement an electronic monitoring policy, which applies to “all forms of employee and assignment employee monitoring that is done electronically.” As of January 1, 2026, certain Ontario employers will also be required to disclose the use of AI systems during the hiring process in publicly-advertised job postings.

More recently, the Strengthening Cyber Security and Building Trust in the Public Sector Act came into force on January 29, 2025. This act enacted the Enhancing Digital Security and Trust Act, 2024, which creates significant new obligations regarding privacy, cyber security and the use of AI in the Ontario public sector.

Even in the absence of legislation, there is a wealth of guidance regarding employee privacy rights in labour arbitration jurisprudence, which will have a ripple effect on the implementation of AI in workplaces.

Federal regulation

Currently, there is no federal legislation that regulates the use of AI in the commercial or employment context, though the Personal Information Protection and Electronic Documents Act continues to govern federally-regulated employers’ collection of personal employee information.

The former Bill C-27 proposed sweeping changes to modernize private sector privacy legislation, including the enactment of the Artificial Intelligence and Data Act (AIDA), a topic of extensive discussion by the Standing Committee on Industry and Technology. While Bill C-27 died with the prorogation of Parliament earlier this year, Prime Minister Mark Carney has expressed his government’s intent to keep AI regulation top of mind and has appointed the first-ever federal minister responsible for AI.

International regulation

The European Union continues to be at the forefront of regulation with the coming into force of the risk-based Artificial Intelligence Act (EU AI Act) in August 2024.

The EU AI Act applies to public and private actors, both inside and outside the EU, where an AI system as defined in the Act is placed on the EU market or its use has an impact on individuals located in the EU. AI systems used in the employment context are classified as “high-risk” systems, since they may appreciably affect the future career prospects and livelihoods of the individuals concerned.[2]

Practical considerations

When implementing AI in the workplace for employees to use in their work, it is important to remember that AI has practical limitations, such as hallucinations, which occur when AI systems generate incorrect or nonsensical information. These errors can arise from various factors, including inadequate training data, algorithmic biases or system malfunctions. In the workplace, hallucinations can lead to misinformation and flawed decision-making. In the broader context, such errors can erode confidence in work product and even cause reputational damage to the professional or business.

As AI use in the workplace increases, so too do cautionary tales resulting from poor output and a lack of user care and diligence.

A California decision dated May 5, 2025 imposed sanctions on two law firms that submitted briefs containing “bogus AI-generated research.”[3] The ruling makes clear that the Special Master appointed in the case considered that plaintiff’s counsel acted in bad faith and was reckless in (1) failing to disclose the use of AI at the outset, (2) failing to cite-check the original brief, and (3) re-submitting a defective, revised brief without adequately disclosing the use of AI.[4]

Just one day later, another example of AI misuse occurred when a Toronto judge found that the applicant’s law firm had used AI to generate legal argument based on fake cases.[5] Interestingly, the lawyer indicated that her firm does not typically use AI but that she would “have to check” with her clerk.[6]

Of course, such errors are not limited to the legal profession; they can affect any industry where AI is used to generate content or make decisions. These incidents highlight the dangers of relying upon AI without proper verification and underscore the importance of human oversight.

Tips for Canadian employers

The issues raised by these cautionary tales can be resolved, or at least reduced, with proper policies and procedures. Employers should consider the following:

  1. Develop an AI strategy: Employers should assess which objectives they want to achieve with AI implementation and whether the proposed AI tool will achieve those objectives. Reasons for implementing AI may include:
    • Monitoring employee productivity, such as computer usage and output patterns.
    • Monitoring compliance with policies, such as scanning of chat communications.
    • Flagging potential data leaks or identifying instances of unauthorized access to sensitive information.
    • Improving efficiency and productivity by automating repetitive tasks.
    • Enhancing decision-making processes through data analysis and predictive modeling.
    • Reducing operational costs by streamlining workflows.
    • Providing better customer service through AI-driven chatbots and support systems.
    • Fostering innovation by leveraging AI for research and development.

    In all cases, employers must ensure that their proposed AI use is compliant with employee privacy rights, applicable laws and their own policies.

    This will vary with sector and jurisdiction but may entail performing a privacy risk assessment that considers, among other things, the purposes of using AI, whether AI will effectively meet those purposes, whether less privacy-invasive means are available to achieve those purposes, and whether the loss of privacy is proportional to the resultant benefits.

    Employers should be transparent about their use of these tools, clearly communicate their scope and purpose to employees, and ensure that the use of AI is proportionate, necessary and sufficiently accurate and reliable in achieving the objective.

  2. Develop clear AI use policies: Employers should create comprehensive policies outlining the acceptable use of AI in the workplace by employees. These policies should address issues such as:
    • Private versus public AI platforms
    • Strengths and weaknesses of company-permitted AI tools
    • Acceptable and unacceptable use cases
    • Ensuring reliability of output
    • Data privacy and client confidentiality
    • Intellectual property
    • Ethical considerations
    • Disclosure protocols for AI-generated work product
    • The responsibilities of employees when using any AI tool

    Employers should also consider whether their intended use of AI impacts other policies, such as their response protocol when an individual requests access to their personal information.

  3. Incorporate AI use in employee confidentiality agreements: Employers should make clear to all employees that the obligation to maintain and safeguard the confidentiality of company confidential information and trade secrets extends to employees’ use of AI tools during and after employment.
  4. Implement robust verification processes: As part of an acceptable use policy, employers should establish rigorous verification protocols to ensure the accuracy of AI-generated output. This includes cross-checking information with reliable sources and involving human experts in the review process. As a best practice, the individual performing the quality assurance check of AI-generated content should be different from the person who first generated the content from the AI platform.
  5. Invest in employee training: Training employees to understand AI systems and their limitations is crucial. This includes educating staff on when AI use is inappropriate and how to analyze output with a critical lens to identify potential errors.
  6. Regularly update AI systems: Keeping AI systems updated can reduce the risk of errors. Employers should work with AI vendors to ensure systems are regularly maintained and improved.
  7. Engage with legal and ethical experts: Consulting with legal and subject-matter experts can help employers navigate the complexities of AI usage, ensuring compliance with applicable regulations and industry standards.
  8. Stay informed: Keep up-to-date with developments in AI technology and relevant legal requirements, including privacy and employment standards.


Next steps for employers

While AI may offer significant advantages to Canadian workplaces, it also presents serious risks that employers should manage carefully.

By implementing robust verification processes, investing in employee training, and adhering to legal and ethical standards, employers can harness the power of AI while minimizing its potential pitfalls. As AI continues to evolve, staying informed and proactive will be key to leveraging its benefits responsibly.

For further information on this topic, please contact the authors or another member of the Employment, Labour & Equalities Group.


[1] For provincially-regulated employers in the private sector, there is privacy legislation in Alberta, British Columbia and Québec that applies to the collection, use and disclosure of employee personal information.

[2] See recital 57 of the EU AI Act and Annex 3 to the EU AI Act, which lists certain classes of high-risk AI systems including those pertaining to recruitment and selection of natural persons and decision-making for promotions, termination of working relationships, task allocation and evaluating behaviour and performance.

[3] Lacey et al v State Farm General Insurance Co, decision dated 6 May 2025, at paragraph 1.

[4] Ibid at paragraphs 17, 19. See also paragraph 26: “Directly put, Plaintiff’s use of AI affirmatively misled me. I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist. That’s scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order.”

[5] Ko v Li, 2025 ONSC 2766 (dated May 6, 2025).

[6] Ibid at paragraph 8.