Alycia Riley
Associate
Article
Artificial intelligence is revolutionizing how we work—reshaping everything from routine tasks to strategic decision-making across all levels of an organization. And while the potential benefits are vast, they come hand-in-hand with equally significant challenges. As excitement around AI’s capabilities continues to grow, so too must our awareness of its limitations.
This isn’t just about waiting for regulators to catch up. Employers are already facing real risks, from flawed AI outputs and “hallucinations” to privacy violations and compliance gaps. The practical and legal implications are mounting, and organizations that don’t proactively address them could be exposed to liability and reputational damage.
This article explores the risks associated with AI in the workplace and offers practical advice for employers to mitigate these risks.
Canadian employers navigating the use of AI must do so within a growing patchwork of legal and regulatory frameworks. Beyond existing privacy laws,[1] AI raises complex issues around data protection, intellectual property, employment standards and transparency.
With provincial, federal and international regulators each taking a slightly varied approach, employers need to stay attuned to the direction of regulation in the jurisdictions where they operate. Staying compliant isn’t just a legal necessity—it’s a strategic imperative for reducing risk and ensuring responsible AI implementation.
The provinces of Alberta, British Columbia and Québec have privacy laws that apply to employment relationships in the private sector.
In Québec, this includes an obligation to inform an individual when their personal information is used to render a decision based exclusively on automated processing of that information. The individual also has the right to know what information was used in making the decision, the principal factors and parameters that led to it, and to have their personal information corrected. There is also a right to submit observations to a member of personnel who is in a position to review the decision.
While Ontario does not have comparable legislation, 2022 amendments to the Employment Standards Act, 2000 (ESA) require certain employers to implement an electronic monitoring policy, which applies to “all forms of employee and assignment employee monitoring that is done electronically.” As of January 1, 2026, certain Ontario employers will also be required to disclose the use of AI systems during the hiring process in publicly-advertised job postings.
More recently, the Strengthening Cyber Security and Building Trust in the Public Sector Act came into force on January 29, 2025. This act introduced the Enhancing Digital Security and Trust Act, 2024, which creates significant new obligations regarding privacy, cyber security and the use of AI in the Ontario public sector.
Even in the absence of legislation, there is a wealth of guidance regarding employee privacy rights in labour arbitration jurisprudence, which will have a ripple effect on the implementation of AI in workplaces.
Currently, there is no federal legislation that regulates the use of AI in the commercial or employment context, though the Personal Information Protection and Electronic Documents Act continues to govern federally-regulated employers’ collection of personal employee information.
The former Bill C-27 proposed sweeping changes to modernize private sector privacy legislation, including the enactment of the Artificial Intelligence and Data Act (AIDA), a topic of extensive discussion by the Standing Committee on Industry and Technology. While Bill C-27 died with the prorogation of Parliament earlier this year, Prime Minister Mark Carney has expressed his government’s intent to keep AI regulation top of mind and has appointed the first-ever federal AI minister.
The European Union continues to be at the forefront of regulation with the coming into force of the risk-based Artificial Intelligence Act (EU AI Act) in August 2024.
The EU AI Act applies to public and private actors (both inside and outside the EU) where an AI system, as defined therein, is available in the EU market or its use has an impact on individuals located in the EU. Use of AI systems with employment implications constitutes a “high-risk” use, since those systems may appreciably impact the future career prospects and livelihoods of the persons concerned.[2]
When implementing AI in the workplace for employees to use in their work, it is important to remember that AI has its practical limitations, such as hallucinations, which occur when AI systems generate incorrect or nonsensical information. These errors can occur due to various factors, including inadequate training data, algorithmic biases or system malfunctions. In the workplace, hallucinations can lead to misinformation and flawed decision-making. In the broader context, such errors can erode confidence in work product and even cause reputational damage to the professional or business.
As AI use increases in the workplace, so too have cautionary tales resulting from poor output and a lack of user care and diligence.
A California decision dated May 5, 2025 imposed sanctions on two law firms that submitted briefs containing “bogus AI-generated research.”[3] The summary of the ruling makes clear that the Special Master appointed for the case considered that the actions of plaintiff’s counsel constituted bad faith and that counsel was reckless in (1) failing to disclose the use of AI at the outset, (2) failing to cite-check the original brief, and (3) re-submitting a defective, revised brief without adequate disclosure of the use of AI.[4]
Just one day later, another example of AI misuse occurred when a Toronto judge found the applicant’s law firm used AI to generate legal argument based on fake cases.[5] Interestingly, the lawyer indicated that her firm does not typically use AI but that she would “have to check” with her clerk.[6]
Of course, such errors are not limited to the legal profession; they can affect any industry where AI is used to generate content or make decisions. These incidents highlight the dangers of relying upon AI without proper verification and underscore the importance of human oversight.
The issues created by these cautionary tales are capable of being resolved, or at least reduced, with proper policies and procedures. Employers should consider the following.
In all cases, employers must ensure that their proposed AI use is compliant with employee privacy rights, applicable laws and their own policies.
This will vary with sector and jurisdiction but may entail performance of a privacy risk assessment that considers, among other things, the purposes of using AI, whether AI will effectively meet those purposes, whether less privacy invasive means are available to achieve those purposes, and whether the loss of privacy is proportional to the resultant benefits.
Employers should be transparent about their use of these tools, clearly communicate their scope and purpose to employees and ensure that use of AI is proportionate, necessary and sufficiently accurate/reliable in achieving the objective.
Employers should also consider whether their intended use of AI impacts other policies, such as their response protocol when an individual requests access to their personal information.
While AI may offer significant advantages to Canadian workplaces, it also presents serious risks that employers should manage carefully.
By implementing robust verification processes, investing in employee training, and adhering to legal and ethical standards, employers can harness the power of AI while minimizing its potential pitfalls. As AI continues to evolve, staying informed and proactive will be key to leveraging its benefits responsibly.
For further information on this topic, please contact the authors or another member of the Employment, Labour & Equalities Group.
[1] For provincially-regulated employers in the private sector, there is privacy legislation in Alberta, British Columbia and Quebec that applies to the collection, use and disclosure of employee personal information.
[2] See recital 57 of the EU AI Act and Annex 3 to the EU AI Act, which lists certain classes of high-risk AI systems including those pertaining to recruitment and selection of natural persons and decision-making for promotions, termination of working relationships, task allocation and evaluating behaviour and performance.
[3] Lacey et al v State Farm General Insurance Co, decision dated May 6, 2025 at paragraph 1.
[4] Ibid at paragraphs 17 and 19. See also paragraph 26: “Directly put, Plaintiff’s use of AI affirmatively misled me. I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist. That’s scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order.”
[5] Ko v Li, 2025 ONSC 2766 (May 6, 2025).
[6] Ibid at paragraph 8.
NOT LEGAL ADVICE. Information made available on this website in any form is for information purposes only. It is not, and should not be taken as, legal advice. You should not rely on, or take or fail to take any action based upon this information. Never disregard professional legal advice or delay in seeking legal advice because of something you have read on this website. Gowling WLG professionals will be pleased to discuss resolutions to specific legal concerns you may have.