Todd J. Burke
Partner
Member, International Board
Co-Leader, International Arbitration
Article
Artificial Intelligence ("AI") is the new marketplace reality. The increase in computing power, improved algorithms and the availability of massive amounts of data are transforming society. According to the International Data Corporation ("IDC"), the AI market is expected to reach $35.8 billion this year, an increase of 44% over 2018.1 IDC also projects that global spending on AI will more than double by 2022, reaching $79.2 billion.2 In this article, we identify a number of emerging legal issues associated with the use of AI and offer some views on how the law might respond.
AI describes the capacity of a computer to perform tasks commonly associated with intelligent beings.3 It includes the ability to review, discern meaning, generalize, learn from past experience, and find patterns and relationships in order to respond dynamically to changing situations.4
In 2017, Accenture Research and Frontier Economics compared the economic growth rates of 16 industries and projected the impact of AI on global economic growth. The report concluded that AI has the potential to boost profitability by an average of 38% by 2035 and to generate an economic boost of US$14 trillion across 16 industries in 12 economies by 2035.5
The promise of AI is better decision-making and enhanced experiences. In their book Machine, Platform, Crowd, MIT professors Andrew McAfee and Erik Brynjolfsson write: "[t]he evidence is overwhelming that, whenever the option is available, relying on data and algorithms alone usually leads to better decisions and forecasts than relying on the judgment of even experienced and 'expert' humans."6 The fear is that AI in an unregulated environment will lead to a loss of human supervisory control and to unfortunate outcomes.
Commentators have recognized that the proliferation of AI will raise new and important legal and ethical questions. Some have identified the need for AI ethicists to help navigate where this technological advance might take us.7
In October 2016, the UK House of Commons Science and Technology Committee published a report on robotics and artificial intelligence, which highlighted certain ethical and legal issues, including transparent decision-making, minimizing bias, privacy and accountability.8 On December 18, 2018, the European Commission's High-Level Expert Group on Artificial Intelligence ("AI HLEG") released its Draft Ethics Guidelines for Trustworthy AI.9 Pursuant to the guidelines, Trustworthy AI requires both an ethical purpose and technical robustness.10
In Canada, the Treasury Board of Canada Secretariat (the "Board") is examining issues around the responsible use of AI in government programs and services.12 On March 2, 2019, the Board released its Directive on Automated Decision-Making, which takes effect on April 1, 2019, to ensure that AI-driven decision-making is compatible with core administrative law principles such as transparency, accountability, legality and procedural fairness.13
One of the central questions in understanding the legal aspects of AI is how the law will evolve in response to it. Will change come through new legislation and regulation, or through the time-honoured tradition of having our courts develop the law by applying existing principles to new scenarios precipitated by technological change?
AI has already been used and accepted in a number of US decisions. In Washington v Emanuel Fair, the defence in a criminal proceeding sought to exclude the results of an AI-based genotyping software program that analyzed complex DNA mixtures, while at the same time asking that its source code be disclosed.14 The Court accepted the use of the software, noting that a number of other states had validated the program without having access to its source code.15 In State v Loomis, the Wisconsin Supreme Court held that a trial judge's use of algorithmic risk assessment software in sentencing did not violate the accused's due process rights, even though the methodology used to produce the assessment was disclosed neither to the accused nor to the court.16
In Canada, litigation involving AI is in its early stages. In 2018, the Globe and Mail reported that a lawsuit involving an AI system had been commenced in Quebec.17 Adam Basanta created a computer system that operates on its own and produces a series of randomly generated abstract pictures.18 Mr. Basanta is now being sued in Quebec Superior Court over an image created by the system.19 Amel Chamandy, owner of Montreal's Galerie NuEdge, claims that a single image from Mr. Basanta's project All We'd Ever Need Is One Another violates the copyright in her photographic work Your World Without Paper (2009) and the trademark associated with her name.20
AI is also being utilized to assist in rendering judicial decisions. In Argentina, AI is being used to help district attorneys draft decisions in less complex cases, such as taxi licence disputes, which presiding judges can then approve, reject or rewrite.21 Drawing on the district attorneys' digital library of 2,000 rulings from 2016 to 2017, the AI program matches new cases to the most relevant decisions in the database, which enables it to predict how the court will rule.22 Thus far, judges have approved all of the suggested rulings, 33 in total.23
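To give a sense of what this kind of case matching can involve, the following is a minimal sketch of retrieving the most similar prior rulings for a new case using text similarity. It is purely illustrative: the sample rulings, the new case text and the TF-IDF approach are assumptions made for the example, not a description of the Argentine system's actual implementation, which has not been publicly disclosed.

```python
# Illustrative sketch only: ranking prior rulings by textual similarity to a new case.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical digital library of prior rulings (a real system might hold thousands).
prior_rulings = [
    "Taxi licence suspended for operating without valid insurance.",
    "Licence reinstated after driver completed mandated safety training.",
    "Appeal dismissed; municipal inspection findings upheld.",
]

new_case = "Driver appeals suspension of taxi licence issued after an insurance lapse."

# Convert the rulings and the new case into TF-IDF vectors, then rank rulings
# by cosine similarity to the new case.
vectorizer = TfidfVectorizer(stop_words="english")
ruling_vectors = vectorizer.fit_transform(prior_rulings)
case_vector = vectorizer.transform([new_case])
similarities = cosine_similarity(case_vector, ruling_vectors).ravel()

for score, ruling in sorted(zip(similarities, prior_rulings), reverse=True):
    print(f"{score:.2f}  {ruling}")
```

The highest-scoring rulings would then be surfaced to a human reviewer; in this sketch, as in the Argentine workflow described above, the final decision remains with the presiding judge.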
The volume and variety of data collection will keep privacy at the forefront as one of the most significant legal issues AI users will face going forward. AI systems consume vast amounts of data, and as more data is used, more questions arise. Who owns the data shared between AI developers and users? Can data be sold? Should shared data be de-identified to address privacy concerns? Is the intended use of data appropriately disclosed and compliant with legislation such as the Personal Information Protection and Electronic Documents Act ("PIPEDA")?
Governments are now updating their privacy legislation to respond to privacy concerns fuelled by public outcry against massive data breaches and the unfettered use of data by large companies. Consumers have become increasingly concerned about the potential misuse of their personal information. In 2015, the European Commission conducted a survey across the 28 member states of the European Union, which found that roughly seven out of 10 people were concerned about their information being used for a purpose different from the one for which it was collected.24
The EU and international regulators have taken an active interest in AI, not only recognizing its benefits but also remaining mindful of potential risks and unintended consequences.25 The European Parliament enacted the General Data Protection Regulation ("GDPR"), a comprehensive set of rules designed to keep the personal data of all EU citizens collected by any organization safe from unauthorized access or use.26 Under the GDPR, companies must be clear and concise about their collection and use of personal data, and must indicate why the data is being collected and whether it will be used to create profiles of people's actions and habits.27 In other words, organizations must be transparent about the type of information they collect about consumers and how that information will be used. Critics contend that the GDPR could present an obstacle to developers looking to design more complex and sophisticated algorithms.28
Unlike the EU, US federal lawmakers have yet to establish regulations to govern the use of personal information in the AI world.29 Sensing the inevitability of data regulation, some large American companies, like Apple, are encouraging the introduction of regulation in the United States.30 On January 18, 2019, Accenture released a report outlining a framework to assist US federal agencies in evaluating, deploying and monitoring AI systems.31
Canada has yet to adopt regulations akin to the GDPR. However, the new federal mandatory data breach notification regulations that came into force on November 1, 2018, were drafted with a view to harmonizing with the requirements of the GDPR to the extent possible.32 The Breach of Security Safeguards Regulations under PIPEDA set out certain mandatory requirements for organizations in the event of a data breach.33 PIPEDA defines a breach of security safeguards as "the loss of, unauthorized access to or unauthorized disclosure of personal information resulting from a breach of an organization's security safeguards."34 Should a breach of security safeguards occur, organizations are required to report the breach to the Office of the Privacy Commissioner of Canada, keep and maintain a record of every breach of safeguards involving personal information under their control, and provide the records of the breach to the Commissioner upon request.35 Organizations will need not only to evaluate their compliance with privacy legislation, but also to ensure that their data-handling practices are sufficiently secure to prevent cyber security breaches.
The inherent nature of AI may require individuals or entities contracting for AI services to seek out specific contractual protections. In the past, software performed as it was programmed to perform. Machine learning, however, is not static; it is constantly evolving. As noted by McAfee and Brynjolfsson, "[m]achine learning systems get better as they get bigger, run on faster and more specialized hardware, gain access to more data, and contain improved algorithms."36 The more data algorithms consume, the better they become at spotting patterns.37
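The general tendency of machine learning models to improve as they see more data can be illustrated with a short sketch. The synthetic dataset, the sample sizes and the choice of a logistic regression classifier below are assumptions made for illustration only; they are not drawn from any of the systems or vendors discussed in this article.

```python
# Illustration of the general point that ML models tend to improve with more data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for whatever the AI system is trained on.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Train the same model on progressively larger slices of the training data and
# measure held-out accuracy; accuracy typically rises as the slice grows.
for n in (100, 500, 2000, len(X_train)):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:4d} examples -> test accuracy {acc:.3f}")
```

For contracting parties, the practical consequence is that the system delivered on day one will not behave identically a year later, which is precisely why the provisions discussed below merit attention.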
Parties might consider contractual provisions that covenant the technology will operate as intended and that, if unwanted outcomes result, contractual remedies will follow. These provisions might include audit rights with respect to the algorithms used, appropriate service levels, a determination of the ownership of improvements created by the AI, and indemnities in the case of malfunction. AI will dictate a more creative approach to contracts, and drafters will be forced to anticipate where machine learning might lead.
Machine learning constantly evolves, making increasingly complex decisions based on the data it operates on. While most outcomes are anticipated, there is a distinct possibility of unanticipated or adverse outcomes given the absence of human supervision. The automated and artificial nature of AI raises new considerations around the determination of liability. Tort law has traditionally been the law's mechanism for addressing changes in society, including technological advances: courts apply its established analytical framework to the facts as they are presented.
We start the tort analysis with the following questions: Who is responsible? Who should bear liability? In the case of AI, is it the programmer or developer? Is it the user? Or is it the technology itself? What changes might we see to the standard of care or to the principles of negligent design? As AI evolves and makes its own decisions, should it be considered an agent of the developer and, if so, is the developer vicariously liable for decisions made by the AI that result in negligence?
The most common tort, negligence, focuses on whether a party owes a duty of care to another, whether the party has breached the standard of care, and whether damages have been caused by that breach. Reasonable foreseeability is a central concept in negligence: the test is whether a reasonable person could predict or expect the general consequences of his or her conduct, without the benefit of hindsight. The further AI systems move away from classical algorithms and coding, the more they can display behaviours that are not just unforeseen by their creators but wholly unforeseeable. Where foreseeability is lacking, are we left in a position where no one is liable for a result that damages others? One would anticipate that our courts would respond to prevent such an outcome.
In a scenario where foreseeability is lacking, the law might replace a negligence-based analysis with one based on strict liability. The doctrine of strict liability, also known as the rule in Rylands v Fletcher, provides that a defendant may be held legally responsible where neither an intentional nor a negligent act has been established, and it is proven only that the defendant's act resulted in injury to the plaintiff.38
Should a negligence analysis remain, the standard of care will need to be redefined in an AI context, and a number of new questions will be central to the court's consideration.
One can envisage a growth industry in negligence actions against software development companies and programmers.
Product liability is another arm of tort law that may take on more significance when assessing liability for defective AI. Under the common law, product liability focuses on negligent design, negligent manufacture and breach of the duty to warn. It generally addresses the liability of one or more parties involved in the manufacture, sale or distribution of a product.39 For this doctrine to apply, the AI system in question must qualify as a product, not a service.40 Ascertaining where in the supply chain of an AI product a defect arose may be difficult given the autonomous and evolving nature of machine learning and algorithms. Commentators have noted that product liability will become relevant to issues arising from the use of autonomous vehicles, robots and other mobile AI-enabled systems.41
Companies like Microsoft and Google have recognized that offering AI solutions that raise ethical, technological and legal challenges may expose them to reputational harm.42 Issues of bias and discrimination have become more prevalent as more companies and governmental entities turn to AI systems in their decision-making processes. For example, a 2016 investigation by ProPublica revealed that an algorithm used by a number of US cities and states to assist with bail decisions was twice as likely to falsely label black prisoners as being at high risk of re-offending as it was to mislabel white prisoners.43
To mitigate built-in biases in collected data and in the decision-making process, a number of companies have developed bias-detection tools. Accenture developed a tool that enables companies to identify and eliminate gender, racial and ethnic bias in their AI software.44 IBM's OpenScale is an AI platform that explains how AI decisions are made in real time to ensure transparency and compliance, which may also have relevance to the definition of the standard of care.45 These solutions, however, may not solve the problem entirely. A senior researcher at Microsoft acknowledged that "[i]f we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases."46
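One common type of check that bias-detection tools perform is comparing error rates across demographic groups, which is the kind of disparity the ProPublica investigation reported. The sketch below computes false positive rates by group on a handful of hypothetical records; the data and the simple group labels are invented for illustration, and the sketch is not a description of how Accenture's tool, OpenScale or any other product actually works.

```python
# Generic sketch of one bias check: comparing false positive rates across groups.
# The records are hypothetical; real bias-detection tools are far more sophisticated.
from collections import defaultdict

# (group, actually_reoffended, predicted_high_risk) -- hypothetical records
records = [
    ("A", 0, 1), ("A", 0, 1), ("A", 0, 0), ("A", 1, 1),
    ("B", 0, 1), ("B", 0, 0), ("B", 0, 0), ("B", 1, 1),
]

false_positives = defaultdict(int)  # predicted high risk but did not re-offend
actual_negatives = defaultdict(int)  # did not re-offend

for group, actual, predicted in records:
    if actual == 0:
        actual_negatives[group] += 1
        if predicted == 1:
            false_positives[group] += 1

for group in sorted(actual_negatives):
    rate = false_positives[group] / actual_negatives[group]
    print(f"Group {group}: false positive rate = {rate:.2f}")
```

A large gap between the groups' false positive rates is one signal that a model is producing the sort of disparity described above; as the Microsoft researcher's comment suggests, detecting the gap is easier than removing its underlying cause.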
Setting ethical parameters within which AI systems will operate is paramount to addressing the issue of bias. Regulating AI will not be an easy feat. Given that AI is constantly evolving, any ethical regulation concerning the use of AI must also continually evolve to remain relevant to the technology.
AI will continue to develop and to disrupt society in ways we cannot yet imagine. It is challenging to keep pace with the speed at which AI systems are being developed and deployed. One developer recently described AI as "a sort of peanut butter you can spread" across multiple disciplines and industries.47 As the peanut butter is spread, organizations must prepare not only for the positive but also for the unintended, and likely unfortunate, negative consequences such technology will bring. It is largely unknown how the law will react to this new reality, but anticipating what those impacts might be is a timely first step.
Sources
1. International Data Corporation, "Worldwide Spending on Artificial Intelligence Systems Will Grow to Nearly $35.8 Billion in 2019, According to New IDC Spending Guide" (11 March 2019), online: https://www.idc.com/getdoc.jsp?containerId=prUS44911419
2. Ibid.
3. B.J. Copeland, "Artificial intelligence" (17 August 2018), Encyclopedia Britannica, online: https://www.britannica.com/technology/artificial-intelligence
4. Ibid.
5. Mark Purdy & Paul Daugherty, "How AI boosts Industry Profits and Innovation" (2017), Accenture, online: https://www.accenture.com/ca-en/insight-ai-industry-growth
6. Andrew McAfee & Erik Brynjolfsson, Machine, Platform, Crowd: Harnessing Our Digital Future (New York: W.W. Norton & Company, 2017) at 34 [McAfee & Brynjolfsson].
7. John Murawski, "Need for AI Ethicists Becomes Clearer as Companies Admit Tech's Flaws" (1 March 2019), The Wall Street Journal, online: https://www.wsj.com/articles/need-for-ai-ethicists-becomes-clearer-as-companies-admit-techs-flaws-11551436200 [Murawski]
8. House of Commons Science and Technology Committee, "Robotics and artificial intelligence" (12 October 2016), online: https://publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/145.pdf
9. European Commission, "Have your say: European expert group seeks feedback on draft ethics guidelines for trustworthy artificial intelligence" (18 December 2018), online: https://ec.europa.eu/digital-single-market/en/news/have-your-say-european-expert-group-seeks-feedback-draft-ethics-guidelines-trustworthy
10. High-Level Expert Group on Artificial Intelligence, "Draft Ethics Guidelines for Trustworthy AI: Working Document for Stakeholders' Consultation" (18 December 2018), European Commission, online: https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai. At page 17, robustness is defined as follows: "Trustworthy AI requires that algorithms are secure, reliable as well as robust enough to deal with errors or inconsistencies during the design, development, execution, deployment and use phase of the AI system, and to adequately cope with erroneous outcomes."
11. Ibid.
12. Government of Canada, "Responsible use of artificial Intelligence (AI)" (5 March 2019), online: https://www.canada.ca/en/government/system/digital-government/responsible-use-ai.html
13. Government of Canada, "Directive on Automated Decision-Making".
14. Cybergenetics, "Seattle judge rules on TrueAllele admissibility and source code" (12 January 2017), online: https://www.cybgen.com/information/newsroom/2017/jan/Seattle-judge-rules-on-TrueAllele-admissibility-and-source-code.shtml
15. Ibid.
16. Harvard Law Review, "State v Loomis: Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing" (2017) 130 Harv L Rev 1530, online: https://harvardlawreview.org/2017/03/state-v-loomis/
17. Chris Hannay, "Artist faces lawsuit over computer system that creates randomly generated images" (4 October 2018), The Globe and Mail, online: https://www.theglobeandmail.com/arts/art-and-architecture/article-artist-faces-lawsuit-over-computer-system-that-creates-randomly/
18. Ibid.
19. Ibid.
20. Ibid.
21. Patrick Gillespie, "When AI writes the Court Ruling" (29 October 2018), Bloomberg Businessweek.
22. Ibid.
23. Ibid.
24. Patrick Mäder, Dr. Christian B. Westermann & Dr. Karin Tremp, "Analytics in Insurance: Balancing Innovation and Customers' Trust" (February 2018), PWC at 13, online: https://www.pwc.ch/de/press-room/expert-articles/pwc_press_20180709_hsgtrendmonitor_maeder_westermann_tremp.pdf
25. Deloitte, "AI and risk management" (2018), at 1, online: https://www2.deloitte.com/content/dam/Deloitte/global/Documents/Financial-Services/deloitte-gx-ai-and-risk-management.pdf
26. Tech Pro Research, "EU General Data Protection Regulation (GDPR) policy" (February 2018), online: http://www.techproresearch.com/downloads/eu-general-data-protection-regulation-gdpr-policy/
27. Nitasha Tiku, "Europe's New Privacy Law will change the Web, and More" (19 March 2018), Wired, online: https://www.wired.com/story/europes-new-privacy-law-will-change-the-web-and-more/
28. Silla Brush, "EU's Data Privacy Law Places AI Use in Insurance Under Closer Scrutiny" (22 May 2018), The Insurance Journal, online: https://www.insurancejournal.com/news/international/2018/05/22/489995.htm
29. John Murawski, "U.S. Push for AI Supremacy Will Drive Demand for Accountability, Trust" (20 March 2019), Wall Street Journal, online: https://www.wsj.com/articles/u-s-push-for-ai-supremacy-will-drive-demand-for-accountability-trust-11553074200
30. Mike Allen & Ina Fried, "Apple CEO Tim Cook calls new regulations 'inevitable'" (18 November 2018), Axios, online: https://www.axios.com/axios-on-hbo-tim-cook-interview-apple-regulation-6a35ff64-75a3-4e91-986c-f281c0615ac2.html
31. Accenture, "Responsible AI for federal agencies" (18 January 2018), online: https://www.accenture.com/us-en/insights/us-federal-government/responsible-ai-federal-agencies
32. Government of Canada, "Breach of Security Safeguards Regulations: SOR/2018-64" (27 March 2018), online: http://gazette.gc.ca/rp-pr/p2/2018/2018-04-18/html/sor-dors64-eng.html
33. Josh O'Kane, "Federal government debuts data-breach reporting rules" (18 April 2018), The Globe and Mail, online: https://www.theglobeandmail.com/business/article-federal-government-debuts-data-breach-reporting-rules/
34. Personal Information Protection and Electronic Documents Act, SC 2000, c 5, s 2(1).
35. Government of Canada, "What you need to know about mandatory reporting of breaches of security safeguards" (29 October 2018), online: https://www.priv.gc.ca/en/privacy-topics/privacy-breaches/respond-to-a-privacy-breach-at-your-business/gd_pb_201810/
36. McAfee & Brynjolfsson, supra note 6 at 85.
37. David Meyer, "A strict regulatory regime may promote public confidence in the use of technology but it might also be seen as an impediment to innovation and progress" (25 May 2018), Fortune, online: http://fortune.com/2018/05/25/ai-machine-learning-privacy-gdpr/
38. CED 4th (online), Torts, "Principles of Liability: Standard of Liability: Strict Liability" (II.1.(c)) at §18.
39. Woodrow Barfield, "Liability for autonomous and artificially intelligent robots" (2018) 9 De Gruyter 193 at 196, online: https://www.degruyter.com/downloadpdf/j/pjbr.2018.9.issue-1/pjbr-2018-0018/pjbr-2018-0018.pdf.
40. Ibid at 197.
41. Richard Kemp, "Legal Aspects of Artificial Intelligence (v2.0)" (September 2018), Kemp It Law at 31, online: http://www.kempitlaw.com/wp-content/uploads/2018/09/Legal-Aspects-of-AI-Kemp-IT-Law-v2.0-Sep-2018.pdf
42. Murawski, supra note 7.
43. Jeremy Kahn, "Accenture Unveils Tool to Help Companies Insure Their AI Is Fair" (13 June 2018), Bloomberg, online: https://www.bloomberg.com/news/articles/2018-06-13/accenture-unveils-tool-to-help-companies-insure-their-ai-is-fair
44. Jeremy Kahn, "Accenture Unveils Tool to Help Companies Insure Their AI Is Fair" (13 June 2018), Bloomberg, online: https://www.bloomberg.com/news/articles/2018-06-13/accenture-unveils-tool-to-help-companies-insure-their-ai-is-fair
45. IBM, "IBM Watson Now Available Anywhere" (12 February 2019), online: https://newsroom.ibm.com/2019-02-12-IBM-Watson-Now-Available-Anywhere
46. Dave Gershgorn, "Microsoft warned investors that biased or flawed AI could hurt the company's image" (5 February 2019), Quartz, online: https://qz.com/1542377/microsoft-warned-investors-that-biased-or-flawed-ai-could-hurt-the-companys-image/
47. Spencer Bailey, "Designed by A.I.: Your Next Couch, Sweater, and Set of Golf Clubs" (15 February 2019), Fortune, online: http://fortune.com/2019/02/15/artificial-intelligence-ai-design/
NOT LEGAL ADVICE. Information made available on this website in any form is for information purposes only. It is not, and should not be taken as, legal advice. You should not rely on, or take or fail to take any action based upon this information. Never disregard professional legal advice or delay in seeking legal advice because of something you have read on this website. Gowling WLG professionals will be pleased to discuss resolutions to specific legal concerns you may have.