
Guide to Healthcare AI 2025: Legal framework, trends & developments
Originally part of a global practice guide by Chambers and Partners, this section on Healthcare AI in Canada has been republished with permission.
The Chambers Healthcare AI 2025 Guide covers the latest legal information on the use of artificial intelligence in healthcare across the Asia-Pacific region, Europe and North America, and provides up-to-date commentary and analysis on regulatory oversight, liability and risk, ethics and governance, data privacy, and IP issues for healthcare AI developers and users.
Gowling WLG lawyers Taryn C. Burnett, KC, Vanessa Carroll, Martin Lapner, Andrew W. McKenna, Marc Richard, Caitlin Schropp, Robert Sheahan, and Wendy J. Wagner are contributing authors for the sections on "Law and Practice" and "Trending Developments" for the Canadian jurisdiction.
NOTE: The information and summaries in the guide are not provided as legal advice and should not be relied upon as such. Readers should consult the contributing authors or other qualified legal and non-legal advisers directly if they need to further understand what rules and practices might apply in particular situations and jurisdictions.
Law and Practice
1. Use of Healthcare AI
1.1 Types and Applications of Healthcare AI
The adoption of AI in healthcare has accelerated dramatically in recent years, transforming both clinical and operational aspects of patient care. AI systems are most commonly being used currently to assist clinical diagnostics, optimise patient treatment plans, and reduce healthcare professionals' workloads. By expediting processes and enabling earlier disease detection and diagnosis, AI holds significant promise for improving public health outcomes.
Healthcare providers analyse and interpret large amounts of complex data during the diagnosis process, which can lead to cognitive fatigue. AI tools can assist in interpreting data and reaching clinical decisions with greater efficiency and reduced mental strain.
AI is also enhancing patient care by enabling remote health monitoring outside traditional clinical settings. Remote monitoring pairs biosensors with analytics to identify patterns and predict potential health risks earlier. In both inpatient and outpatient contexts, AI systems are used by healthcare providers in determining optimal medications, dosages, and treatment plans.
Current AI applications can also streamline routine tasks for healthcare providers. Some AI-powered notetaking systems automatically generate clinical reports from patient conversations, reducing paperwork and allowing healthcare providers to focus on facilitating meaningful patient interactions. Operational AI is also used to streamline documentation and patient flow, including in hospital emergency departments.
1.2 Key Benefits and Challenges
AI systems are used to enhance the speed and quality of patient care while reducing physical and cognitive workloads.
AI systems can assist with clinical decision-making, increasing the efficiency and quality of patient care; peer-reviewed studies report gains in diagnostic accuracy for selected use cases. AI systems can also assist with administrative tasks and reduce the workloads that contribute to cognitive fatigue. They can also process and compare large amounts of data without being affected by fatigue, emotion or memory.
Despite the benefits of AI systems in healthcare, the novel technology raises unique challenges. AI systems require large amounts of high-quality data to build accurate algorithms. When training data is imprecise, unvalidated, unreliable or incorrect, it can compromise the integrity of the output the system generates. An AI system is only as good as the data used to build it, and not every clinical specialty has large amounts of high-quality data available. Canadian health data is currently fragmented across jurisdictions, making it difficult to establish a centralised dataset.
Further, algorithms are at risk of bias if they are not trained on data from diverse populations. Data may be influenced by human subjectivity and repeat inequities arising from discriminatory practices. In addition, the use and storage of large datasets are vulnerable to security threats and confidentiality concerns. From 2015 to 2023, there were at least 14 reported major cyber-attacks on Canadian hospitals, labs and health networks, and the use of AI systems in healthcare carries the risk of major data leaks.
There are also concerns about data sovereignty and control over collection, use and interpretation.
1.3 Market Trends
Major trends in the Canadian AI healthcare sector include the integration of AI into diagnostic imaging (such as radiology and pathology), the deployment of predictive analytics for early warning and patient risk stratification, and the widespread adoption of AI-powered documentation tools to reduce administrative burden.
Innovation and adoption are being driven by a diverse set of stakeholders. The government of Canada has invested in AI tools aimed at improving the Canadian healthcare system. For example, in June 2025, Canada Health Infoway launched a federally funded programme that provided 10,000 primary care clinicians across Canada with AI Scribe licences. The federal government also committed CAD60 million in the 2021 budget to support the Pan-Canadian Artificial Intelligence Strategy, launched in 2017 to promote collaboration between provincial AI hubs. On 24 September 2025, the federal government announced the creation of a task force on AI that will recommend policies to improve research, talent development, adoption and commercialisation of AI in Canada.
The Royal College of Physicians and Surgeons of Canada, the Canadian Medical Association and Canada's Drug Agency all acknowledge that AI will be an important aspect of patient care in the future. They have advocated for initiatives such as implementing AI and digital technologies into residency training and healthcare delivery.
2. Legal Framework for Healthcare AI
3. Regulatory Oversight of Healthcare AI
4. Liability and Risk in Healthcare AI
5. Ethical and Governance Considerations for Healthcare AI
6. Data Governance in Healthcare AI
7. Intellectual Property Issues Regarding Healthcare AI
8. Specific Applications of Healthcare AI
9. Future Trends and Regulatory Developments in Healthcare AI
10. Practical Considerations in Healthcare AI
Trends and Developments
AI in Canadian Healthcare
As artificial intelligence (AI), including the rapid rise of generative AI (GAI), becomes more embedded in Canadian healthcare – encompassing medical diagnoses, virtual nursing assistants, medication management, robotic surgery and healthcare data management, to name a few – clear sector-specific regulation remains a work in progress.
The Regulation of AI
The principal federal proposal, the Artificial Intelligence and Data Act (AIDA), died on the Order Paper when Parliament was prorogued in January 2025, and no successor bill has yet been introduced. Provinces continue to rely on existing statutes, guidance documents and voluntary codes to guide the use of AI in healthcare.
Federal Landscape
The Government of Canada published its Digital Charter in 2019 and followed up with Bill C-27, the Digital Charter Implementation Act, 2022. Although Bill C-27 passed second reading, significant criticism was levied at its reliance on future regulations and its limited sectoral tailoring, which led to delays at the committee stage, and the Bill ultimately died on the Order Paper when Parliament was prorogued.
In this legislative vacuum, the federal government announced a series of initiatives to support responsible and safe AI adoption, including a refreshed membership of the Advisory Council on AI, establishment of a Safe and Secure AI Advisory Group, release of the Guide for Managers of AI Systems applicable to federal institutions, and expansion of signatories to the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.
Health Canada continues to regulate many clinical AI tools as software as a medical device (SaMD) under the Medical Devices Regulations. Using the International Medical Device Regulators Forum risk classification, the department mandates more rigorous evidence and post‑market surveillance for software whose malfunction could directly compromise patient safety. In February 2025, Health Canada issued its Pre‑market Guidance for Machine‑Learning‑Enabled Medical Devices, detailing expectations for algorithm change protocols, transparency, and cybersecurity measures.
Software that is limited to administrative functions remains exempt, as do applications that merely support, rather than supplant, clinical judgment.
Provincial Initiatives
Provincial legislation applicable to AI in healthcare generally remains in the early stages, with many provinces relying on existing frameworks, such as privacy laws and healthcare regulations, to address AI-related concerns.
Some provinces have taken steps to modernise legislation and specifically contemplate AI. In Ontario, Bill 194, the Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024, received Royal Assent on 25 November 2024. The statute empowers future regulations that will require public sector entities, including hospitals, to disclose their use of AI, implement accountability frameworks, adopt risk management measures, and adhere to prescribed technical standards. In prescribed circumstances, institutions may be required to ensure an individual provides oversight of AI use.
In Québec, An Act respecting the protection of personal information in the private sector (applicable to the private sector), An Act respecting Access to documents held by public bodies and the Protection of personal information (relevant to the public sector), and the Act respecting health and social services information (applicable to healthcare organisations) require organisations to notify individuals of automated decisions, disclose the personal data and principal factors relied upon, and provide a right to human review.
Professional Regulatory Guidance
Canadian health professional regulators have released preliminary, high-level guidance regarding the use of AI, emphasising that AI must augment rather than replace professional judgment. The guidance consistently urges caution when using AI with three dominant themes:
- ensuring the work product is accurate;
- protecting client/patient privacy; and
- establishing accountability for the use of technology by professionals.
Several regulators apply their broader standards surrounding technology to AI, requiring practitioners to carefully evaluate, apply, and adapt technology in ways that prioritise and protect patient interests (eg, ensuring the use of reputable AI systems and continuing to assess electronic evaluations to identify any inadequate or erroneous results). Other associations remind healthcare providers of the importance of understanding patients' comfort with, and access to, emerging AI tools before recommending them, and of implementing safeguards to protect patient privacy and avoid conflicts of interest.
While most regulators do not prohibit registrants from using AI, many expressly warn against substituting computer-generated assessments, reports, or statements for the professional opinion of a healthcare provider.
AI and Civil Liability
Determining liability in cases involving the use of AI in healthcare remains complex and uncertain, as legal frameworks adapt to both rapidly evolving technologies and the shifting dynamics of human and AI-supported decision-making.
The introduction of AI in hospital settings may, for example, require institutions to develop protocols for the appropriate selection, implementation, training, maintenance, and inspection of such technologies, and to ensure that staff are appropriately qualified to use the applications. Developers and vendors may be expected to take reasonable care in the development of AI tools and to warn of limitations and risks.
It is challenging to predict how courts may assess healthcare providers’ use of AI, particularly given the evolving nature of these technologies and inconsistent adoption, guidance, and practices. Claims involving the use of software (other than AI) may provide some insight into the potential consideration of AI use in healthcare. These cases, coupled with existing liability principles, mean that individual healthcare providers, institutions, developers of AI systems, and vendors may find themselves defending new types of AI claims relating to the negligent design, implementation, or use of an AI tool.
Looking ahead, the use of AI systems is likely to result in an increasing number of defendants in legal actions, extending beyond traditional healthcare providers to include others in the supply chain, such as developers and vendors of AI systems. As algorithms become more autonomous and less susceptible to real-time human override, it may be harder to portray clinicians or hospitals as the principal risk bearers. At the same time, the opacity of AI systems is expected to create challenges for plaintiffs in identifying and proving that a specific act or omission caused them harm, potentially shifting the focus back to more traditional defendants and raising questions about how product liability claims will be assessed.
In this uncertain environment, organisations that develop, distribute, or integrate AI should carefully examine their contractual arrangements and proposed reallocations of risk, including through limitations of liability, indemnities, and liability protection.
Privacy and Cybersecurity
The adoption of AI in healthcare raises questions about patient consent, data sharing, and transparency obligations under federal and provincial privacy laws.
When applied in the healthcare context, issues may arise relating to authorisation to use the training dataset for the AI model, the collection and use of new data to update or fine-tune the model, the use of patient information when interacting with AI, and requirements for consent and/or de-identification of data in each of these cases. These issues increasingly need to be examined before an AI tool is implemented, often in the context of a privacy impact assessment. In Alberta and Québec, for example, privacy impact assessments are required under health sector-specific privacy legislation.
Privacy legislation is also beginning to impose additional obligations with respect to transparency when AI makes or recommends a particular decision, as well as the right of individuals to request a human decision-maker.
Regulatory Initiatives and Investigations
Federal and provincial Privacy Commissioners have been among the most active in developing expectations for the use of AI, including in the healthcare sector. While they do not directly regulate AI as a whole, they play a key role in ensuring that the use of AI systems aligns with existing privacy laws (eg, Personal Information Protection and Electronic Documents Act (PIPEDA) and provincial health privacy statutes).
The Office of the Privacy Commissioner of Canada (OPC) published A Regulatory Framework for AI: Recommendations for PIPEDA Reform, which recommends stronger accountability, more explicit rules for automated decision-making, and rights for individuals to challenge AI-driven decisions. For the first time, both provincial and federal Commissioners have engaged in investigating AI-related privacy concerns, including working collaboratively to coordinate AI oversight in a joint investigation.
Algorithmic Bias and Discrimination
There may be unconscious bias and unintentional discrimination in the training data used to develop AI systems, which can extend historic harms in the form of biased output. Academic literature, including a 2024 Stanford-led study, has demonstrated that large language model chatbots can perpetuate debunked, racially biased medical myths. AI tools that recommend discriminatory practices, whether unintentional or not, could form the basis of a human rights claim.
Intellectual Property Uncertainties
Canadian intellectual property statutes have not yet expressly addressed ownership or infringement questions concerning AI-generated content, including its application in healthcare. For example, questions remain regarding the subsistence of copyright, authorship attribution, and inventorship for patent-eligible AI outputs. Responsible AI integration in health-related research and medical report generation is required to safeguard against plagiarism and protect intellectual property.
Conclusion
AI promises to revolutionise Canadian healthcare delivery, diagnostics, and resource allocation. While the regulatory landscape continues to evolve, liability and risk management considerations for healthcare providers and organisations, as well as AI vendors and developers, favour a cautious approach.

