Naïm Alexandre Antaki
Partner
Co-leader of the Canadian Artificial Intelligence Group | Leader of the Corporate Commercial Department - Montréal
On-demand webinar
WENDY WAGNER: All right, we'll get started. Good afternoon, everyone. And welcome to the first in a series of sessions that we're going to have on AI, AI on the Horizon, as we've entitled it. And I just want to make a quick announcement that our second session will be at the end of February. And it will be AI and cyber security, so very interesting topic. And watch for the invite on that, and the announcement, and more information.
So it's great to see so many faces in the room. And I know there's a lot of people attending virtually as well. So obviously, a huge topic of interest for everyone as we kick off 2025. And of course, I'm sure the draw that we got here was largely attributable to our good fortune in securing Mark Schaan as our esteemed speaker.
And just by way of introduction, Mark is currently at the Privy Council Office. He's serving as Deputy Secretary to the Cabinet on Artificial Intelligence. And that's a position he was appointed to in July of 2024. And in that role, he supports the overall AI agenda, seeking to position Canada for leadership in the responsible development and use of AI.
Mark was formerly at Innovation, Science and Economic Development Canada, ISED. And he led the AI portfolio there. And he was intimately involved in regulatory initiatives, including development of the Artificial Intelligence and Data Act, AIDA, that we'll be speaking about today. And his public service career has really spanned the policy spectrum. He's been involved in the telecommunications portfolio, investment review, intellectual property, bankruptcy and insolvency, privacy and AI.
So as you can see, he doesn't know just a little about one thing. He knows about a lot. And he was actually a Rhodes scholar at the University of Oxford and obtained a doctorate of philosophy in social policy there. But more importantly, he's a University of Waterloo alumnus as well. And I'm sure they are very proud to claim that.
So our session for today, for the most part, will be a dialogue with Mark on the state of AI regulation in Canada. And obviously, that's very timely and interesting in this uncertain environment for regulatory initiatives, and for politics, and for the economy, and it seems like for pretty much everything else.
So I'll also introduce my colleague Naim Antaki. Naim practices business and technology law in both Quebec and Ontario. And along with Chris Alam in our Toronto office, he co-leads Gowling's artificial intelligence group. So Naim is obviously steeped in all things AI. And he's going to say a few words on what we're hearing from our clients about AI and the issues they have questions about.
And to introduce myself, I'm Wendy Wagner. I wear a few hats in the firm. But most relevant to this discussion, I'm co-lead of our privacy, data protection and cyber security group, along with my colleague in Montreal, Antoine Guilmain. And Antoine and I are fielding all kinds of questions about AI from our clients.
For better or for worse, it seems to be an area that's fallen to privacy compliance professionals and privacy officers within organizations. And obviously, there's a very important intersection between the lawful use of data and the adoption of AI.
So I'm going to spend a few minutes on AI regulation in Canada and where we're at, just so we can all level set. And then Naim's going to speak a bit about what we're hearing from our clients. And then we hope to have-- we will have a great discussion with Mark and hope to have some time left over for your questions as well.
So starting out with the overview of Canada's Artificial Intelligence and Data Act. What can I say? RIP. [LAUGHS] So our attempt at comprehensive regulation of AI within Canada has died on the order paper with prorogation. I'm sure no one is more disappointed in that than Mark. So we'll hear about that. So it was a part of Bill C-27, which also would have reformed our private sector privacy law, PIPEDA.
And actually, in our invite to the session, there was a link to Gowling's AIDA primer. There was an enormous amount of work put into this. Not only is the content fabulous, it is very graphically pleasing as well. And I'm here to tell you that I think it's still relevant. My colleague Antoine was very instrumental in putting this together with his team and spent countless hours. And I think whatever we see come back for regulation of AI in Canada will obviously draw from what was done within AIDA.
So what was AIDA? It had a goal of establishing common requirements for the design, and development, and use of AI systems, and also to prohibit conduct that would be harmful to individuals associated with use of AI. And an AI system, under the act, was any technological system that, using a model, makes inferences in order to generate output, including predictions, recommendations, or decisions, so a very broad definition of AI system.
And it had some overarching obligations applicable to all AI, including obligations of transparency, and also offenses such as the use of personal information that was obtained unlawfully, so hacking or things like that, or causing harm or damage through the use of AI. Then it detailed obligations for general purpose systems, so think of things like ChatGPT, and also high-impact systems as defined in the act.
So those general purpose systems would have been subject to requirements such as providing a plain language description, requirements that governed use of data within the system, the requirement for impact assessments, privacy impact assessments, measures to mitigate harm caused by the system, measures to give people the ability to be able to identify output that was generated by AI versus humans, and requiring third-party compliance assessments and record-keeping as well.
In terms of the high-risk systems, these were defined in AIDA based on different use cases, so uses within employment to determine employability, or promotions, or things like that, decisions regarding provision of services, so you can think of denial of insurance coverage. Use for biometrics, use for content moderation, which we have heard a lot about recently, use for healthcare, justice applications, and applications within law enforcement were all considered high-risk use cases.
So there were similar obligations imposed on these high-risk systems, as for the general purpose systems. And also, additional obligations where a high risk system uses a machine learning model, so that was another feature of the act. There were certain obligations regarding use of machine learning models, including measures about the data that you could use within the model and also the requirement to make what are called model cards available, so basically, transparency about the algorithm that was used.
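As an illustration only, here is a minimal sketch of what a model card for such a system might capture. The field names and example values below are hypothetical; they are not drawn from AIDA, its draft regulations, or any published standard.

```python
# Illustrative sketch of a minimal "model card" record, loosely reflecting the
# transparency themes discussed above. All fields are invented for illustration.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ModelCard:
    model_name: str
    intended_use: str                  # what the model is meant to do
    training_data_sources: List[str]   # where the training data came from
    known_limitations: List[str]       # documented failure modes or biases
    evaluation_metrics: Dict[str, float]
    human_oversight: str               # how people can review or override outputs

card = ModelCard(
    model_name="claims-triage-v2",
    intended_use="Rank incoming insurance claims for manual review priority",
    training_data_sources=["Historical claims 2018-2023 (de-identified)"],
    known_limitations=["Lower accuracy on claim types rare in the training data"],
    evaluation_metrics={"accuracy": 0.91, "false_negative_rate": 0.04},
    human_oversight="All high-priority rankings are reviewed by an adjuster",
)
print(card.model_name, "-", card.intended_use)
```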
And another feature of the act was that for all these regulated AI systems, there would be a prescribed accountability framework. And the enforcement was going to be between ISED and also an artificial intelligence and data commissioner, which would have been a new commissioner role within Canada. So that was AIDA. And we'll see what replaces it and comes back.
It's not the only legislative initiative in Canada. We have had advancements provincially as well. So Ontario Bill 194 is called the Strengthening Cyber Security and Building Trust in the Public Sector Act. It actually did receive royal assent on November 25 of last year and will come into force at a future date when the regulatory framework is developed.
It is only applicable to Ontario's public sector entities, so provincial and municipal, and children's aid societies, and school boards. And one act that it introduces is the Enhancing Digital Security and Trust Act, so EDSTA. And that addresses cyber security, but also use of AI by public bodies.
So it did get some criticism from Ontario's Information and Privacy Commissioner for not being prescriptive in the act itself. Pretty much everything is left to regulation, but it contemplates a transparency framework, an accountability framework, and risk management measures, all of which will be detailed in regulations.
So one last word on global proliferation of regulations. There are a ton of laws, guidances, frameworks, whether voluntary or mandatory, globally as well. And Mark's going to be talking about a lot of those and some of the common principles.
The EU AI Act entered into force in August of 2024 and has a staged implementation, depending on what type of system is being regulated. It's a bit different from what was contemplated in AIDA. And maybe we can get into a discussion as to why. Because everyone loves-- in global organizations, we love harmonization because we hate to have to do 150 different things. So we'll hear about that. Just foreshadowing some of the really tough, hard-hitting questions I'm going to ask Mark.
So one of the big differences was that it categorizes systems differently, and some AI systems are actually prohibited. And those prohibitions are the first aspect of the EU law to come into force. And then the obligations with respect to AI systems depend on whether it's a minimal risk system or whether it's a higher risk system. And there are different categories within that.
But otherwise, many of the obligations are very consistent. And I'm sure we'll find that as we look at regulation globally. And that's why there is an ability as an organization to develop a framework that will likely address most requirements as they are adopted and developed globally.
There's a requirement for a risk management system, data governance is an element of all the legal frameworks and regulatory frameworks, the requirement to have certain types of documentation and make that available, record-keeping, human oversight, accuracy, cyber security. And those elements are all part of that law.
But I think I will stop there and ask Naim to come up and give you a bit of an overview of-- everyone's always interested to know, what is everyone else asking about? What are we giving advice on? What is it? So Naim's going to tell us a little bit about that.
NAIM ANTAKI: Thank you very much, Wendy. This is such an interesting topic. And everyone, all of us have been drinking from the fire hose. And if I had to boil down what we're hearing from clients to one question, it is, are we ready for what's next? And is what's next actually now? Or was it six months ago when everybody else was looking at it, and we're late in the game?
And so I think that there's a lot of expectation about what AI can do. And then when you go to implementation, it can become a little bit more cumbersome. And I think the biggest issue arises if you-- I don't want to say if you forget the law-- but if you focus on the business aspect first.
As you know, in Canada, we have, let's say, ground to cover in terms of productivity. And that has been used as an impetus to try to drive forward different technological advances, AIDA is one of them, and different initiatives by the government, beyond regulation.
And so the idea, though, is if we go too fast, we can be caught with liability that we're not aware of. And some of the, I guess, key actors in the area have really emphasized the fact that we all have to do our homework. And all doesn't just mean the lawyers. It doesn't just mean the data scientists. It doesn't just mean the tech people or the business people, but it really is a common effort.
And our firm and our national AI group, of which I'm the co-head with Chris Alam here in Toronto, we try to take that approach. How can we help you with what's next? And we do that using a multidisciplinary approach. And so obviously, privacy is absolutely essential. It's an essential component. And Wendy, Antoine, Jasmine, Chris, Brent, and a number of others in our group nationally are here to help on this one.
But it is not a sufficient analysis. And so if you think about intellectual property, we have Selena, who's here, Mark Crandall, Mark Springings, Enrique Sedais, and others across the firm who can help on patents and copyright. Going back to business, and we'll go back to privacy in a second. From an IP standpoint, the key business question is, first, do I have the right to use what I want to use? And number two, is it still going to be mine?
And I think one of the mistakes that people have made, and it's OK-- there's a new tool, and it's widely available, and it's very easy to use, so people rush to use it-- is not stopping to think about, well, who will own the data afterwards? And first, do I have the right even to input the data? Is it mine or is it somebody else's? And what happens afterwards if a mistake was made? So is there liability for this?
AIDA was trying to focus on high-risk systems. But it is what I would call AI-specific legislation. It doesn't mean that it is the only piece of legislation that exists or that was going to exist. There are a number of AI-related laws that exist in intellectual property, from a liability standpoint, contracts, et cetera, et cetera. Competition is a really important one also.
Going back to business, because the question that people ask us is, well, how well do I have to understand this AI thing? I'm not an AI guru. And the issue is that you actually have to understand it enough so that you can explain it. Because, as Wendy was saying, explainability and transparency are key components. So not only do you have to understand it, but you also have to explain it to people who are going to use it.
And forget compliance for a second. It's very important. But beyond compliance, it's very good for you to be able to do that because it will allow you to then protect yourself in the potential case of a liability. And what we've seen in cases in the past year or past year and a half is that even if AIDA has not been passed, there is already liability for AI errors.
You cannot hide behind the chatbot, for example, that you are using. The chatbot will get it wrong some of the time. And people at your organization may not know all of this, so you have to work on the compliance aspect. You have to work on the contractual protection aspect. How do you deal with protection with the AI service provider?
For example, you have to work on the employment aspect. And you also have to work beyond law on culture and education, revisiting the assumptions that you're making about AI, which is advancing very rapidly. Revisiting what's going to be next with AIDA is also very important. And so these are all issues that are key to consider.
From a governance standpoint, we've had clients asking us a lot of questions also. For example, if you're a public company, you may have seen that you have some activist investors who are pushing for the voluntary code relating to AI, and to AIDA, to still be, I guess, put into place even if AIDA has not yet been enacted.
You also have different norms, standards, and regulations all over the place. And I'm saying all over the place because it's really complicated. At the end of the day, I think we all have to take a deep breath and realize that there are certain common themes that are very important. We talked about transparency, about explainability, about other items, how do you deal with your terms and conditions, et cetera.
And with this, I think that you're able to advance. And it's not just about, oh, let's draft a generative AI policy, which is important, but then we're done. It's a journey. And it's something that I will say business people put a lot of emphasis on and that you need to help them along with the different tools that you have at your disposal, in order to make sure that it's implemented in the right way, without just saying, hey, I don't need to worry about all of this because I'm not using AI in my company.
Number one, you are, you just don't know about it yet. And so it's better to ask the question. And number two is it's a journey where you not only need to think about what you are doing, but you have to think about what your competitors may be doing, and frankly, what nefarious actors are doing.
So if people can clone my voice and you're using voice recognition to authenticate me, there may be an issue. And AI is now multi-modal, so not just text to text, but video, voice, and all of that. And so you always have to be thinking about the next step. Going back to the first question, as to how to be ready for what's next, I think that, as we say, [SPEAKING FRENCH].
I think we're very lucky to have the discussion with Mark today. Because we can't just say, well, we'll just wait for AIDA to be enacted in a few months, we don't want to think about it. It forces us to think about a situation where AI is being used, there is some uncertainty, and we still need to work through it. And so it's actually now more than ever that we need to think about these different issues. And the discussion with Mark will help to shed a lot of light, I'm sure, on all of this.
First of all, Mark, thank you very much for being here again at Gowling. You were here in May of last year for our financial services and technology regulatory pulse with partners AB Stephenson, Adam Gerritsen, Elena Scotchmer, and others. And the goal there was to understand what was coming.
Here, the question that is the most important for everyone is, well, what's happening now with AIDA? And now that prorogation has happened, what are exactly the next steps? Is it just gone and you have to start from the beginning again? Or, can we still think about perhaps all the work that's been done already, the guide and others, as perhaps a sign of things that may still be coming?
MARK SCHAAN: [SPEAKING FRENCH]
But I'll start in English. So a huge thanks to Gowling, again, for the invitation. I'll start with something I always say at the beginning of AI talks, but it's taken on a different meaning, which is, I've long said that for Canada, AI has been and must be a long game, which takes on new meaning now that our draft legislation has died on the order paper.
What I mean by the long game portion is really that the government of Canada's enduring interest in artificial intelligence actually goes back well before an attempt to impose a regulatory structure around its most sophisticated uses. And I realize this is mostly a legal regulatory risk conversation. But I do think it's useful to talk a little bit about the role that the government has played and will continue to play in advancing what is a truly game-changing technology.
The long game portion of that is that it was 41 years ago that the Canadian Institute for Advanced Research began its first research program in neural nets, at the encouragement of now Nobel Prize-winning computer scientist and thought leader Geoffrey Hinton. And I had a chance to spend some time with Geoffrey this morning, which was a useful reminder of this long game.
When that program was started 41 years ago, it was not nearly the lock that it is now, in that there were lots of folks within the academic community who doubted that CIFAR's bet on neural net thinking would actually amount to anything at all. And here we are, 41 years later, with a technology that's now abundant in our lives, in our society, in our economy, and for which Geoffrey now holds a Nobel Prize in physics, no less, which is always amazing, that a computer scientist can win a physics award.
I think that's helpful as a reminder of the fact that we don't need to just think about this technology in its current instance and be a bit knee-jerk about our response. So while we are at this particular moment where our first and initial regulatory foray on the binding side of the ledger has essentially gone away, that doesn't mean we should suddenly think that there's nothing left, or that we don't have an enduring trajectory on this technology that I think is going to have to continue to motivate us as we go forward.
And so I think the government's interests in AI are not fleeting. I think they are perpetual and I think will endure post March 9, post March 24, and on a go forward basis. In terms of a few thoughts or maybe a few considerations about where we actually find ourselves, I have had the joys and pleasures of being a public servant for a relatively long time, which means that I've-- this is not, to use the colloquialism, this is not my first rodeo.
So for folks who remember, this was my second attempt at modernizing private sector privacy law in Canada. C-27 had major changes to PIPEDA that had previously found themselves in C-11 in the previous parliament, which also died on the order paper. And people will remember that when it relates to copyright reform, it was the third try that was the charm, in terms of actually getting meaningful reform of our copyright legislation through.
Or, if I take a formerly moribund and arguably in-the-shadows framework law like the Canada Business Corporations Act, which saw itself unamended for almost three decades. And in the course of my time at the Innovation, Science and Economic Development department, I think I amended the CBCA, with the help of the minister and the parliament, probably something like eight or nine times for all sorts of things, whether that was environmental regulations and obligations, whether that was diversity, whether that was beneficial ownership and fraud transparency.
So that's a long way of saying that it's certainly not lost. And this is not the first time in which we've had legislative efforts, particularly at a generalized, general-application level, that have had to come back in future parliaments to figure out where things land.
The second thing that I think you already helpfully touched on, which is something I push back on, on the regular, which is there is a portion of the particularly civic activist community that will say things like, well, this is just a completely unregulated technology that's now running rampant across the entirety of our society and economy. To which, I usually try not to get shrill or visibly contort my face in a sign of displeasure. But note that that's actually fundamentally not true.
So the personal information that feeds AI systems is still regulated by the Personal Information Protection and Electronic Documents Act, the private sector privacy law. Those entities that are pursuing AI functionalities within the economy are still subject to the rules of the Competition Act. The products that they produce potentially find themselves under either some form of federal or provincial jurisdiction, in a number of ways that are going to regulate the safety and efficacy of the product at the product level.
Perhaps they're existing within a sector that already finds themselves under some sort of sectoral obligations as it relates to the safety, transparency, and functionality of the products and services that they offer, whether that's in financial services, whether that's in motor vehicles, whether that's in aviation. So this is not an unregulated technology.
That said, the rationale for which we pursued the Artificial Intelligence and Data Act is that we felt that there was a gap at the level of the algorithm that would benefit from specific rules, specific to the models and the algorithms that are at the base of a ton of functionalities that are now existent within the commercial realm. And that rather than just relying on the product regulation or the sectoral or activity regulation, the algorithms themselves and the models could benefit from clear rules and obligations as to what should happen to them.
That too is not lost. So the companion piece to AIDA was the voluntary code for generative AI by industry, now signed by a wonderful diversity of organizations, some at the industry association level, some at the venture stage, some at some of our largest companies, some at the multinational level. And between that effort and the effort of the Hiroshima code, which was a G7 deliverable coming out of, first, the Japanese presidency, and then the Italian presidency, and now, we'll see, Canada's presidency, which I can come back to.
I think there are lots of opportunities for folks looking for certainty and who are looking for what compliance looks like to be able to orient themselves in the marketplace, notwithstanding the fact that there is not a binding obligation with enforcement penalties subject to what they're going to do with their algorithms.
NAIM ANTAKI: Thank you very much, Mark.
WENDY WAGNER: So, Mark, picking up on that, is there anywhere that you can go as the federal government, though, that allows you to impose something that's binding at that algorithmic level? I know Ontario's gone the Bill 194 route because it's something they can do, because it's public sector regulation.
So it's clearly within their remit and will ultimately get at this issue, down to the level of the algorithm, and not just some of the side issues like privacy and things like that. So in this time of uncertainty, where else can you go, I guess, that will give you that ability?
MARK SCHAAN: Yeah. So I'd say maybe a couple of things. One is, absent parliamentary direction, we are not in a position to be able to add a regulatory structure into some existing statute. I have a pen pal relationship with a little-known committee called the Standing Joint Committee for the Scrutiny of Regulations.
One of their favorite things to do is to pore through Canada's statutes and figure out whether in fact the regulation-making authorities granted by the statute are in fact being carried out effectively by the regulations that are being passed.
And so I couldn't tomorrow just decide that I have a regulation-making authority under PIPEDA or a regulation-making authority under the Competition Act, or the Consumer Products Act, or any of these others to just say, oh, by the way, here are a whole set of regulations related to the safety and oversight of artificial intelligence.
So until parliament makes a determination that they want to go back to the level of AI regulation, either at general purpose level or at some sort of sector specific level, we're left with what we're left with. I'd offer two things, though, particularly for organizations that are in that space and seeking to actually create the conditions for certainty and for compliance.
One is they may actually find themselves in a sector, or what I like to refer to as an activity base, that is actually subject to some rules related to artificial intelligence. And the classic there is obviously the financial services sector, where the Office of the Superintendent of Financial Institutions has made it abundantly clear that the prudential risk related to artificial intelligence is sufficiently important that they will impose guidelines as it relates to transparency, explainability, and some of the actual terms that are often found within the EU AI Act and that were in AIDA, in OSFI guidance and obligations related to prudential and financial services.
So they may find themselves in that space where they actually are regulated on all of those sorts of things. But suppose I wasn't a bank, or a fintech, or a transportation mode-- not Air Canada's use of AI in biometric information systems, but actually the safety of a motor vehicle, which will also potentially find itself governed by some sort of specific regulation.
What I would say then is that there is absolutely nothing stopping the industrial community from coming together and thinking about a next level of granularity as it relates to the obligations that are currently part of the code of conduct for generative AI use by industry.
And I would remind folks that the birth of what is currently our private sector privacy law, the Personal Information Protection and Electronic Documents Act, began as an industry standard. It actually began its life as industry coming together and saying, oh, there's this whole new thing called the internet, and we're going to want to transact and do commerce in this space. And that's going to require a vast amount of personal information to transfer between hands of all sorts of new players.
What's liability like in that space? What does it mean to have obligations in that space? Maybe we should wrap our own heads around that. Now, I will be completely candid that never again under my watch will we see an industry standard actually, literally be superimposed directly into law. And I have good backing on that front, in that the courts have been clear that they don't necessarily love the way that the current schedule lives within the act.
But I do think that there is, as I say, nothing stopping industrial players from coming together to figure out what they would propose as the next level of granularity. And I think you will find both openness from government, but also openness likely from some of the largest players in the AI ecosystem who are equally keen, even in this wild world where there's some folks who are suggesting that we should actually go back to a bit of a Wild West in terms of AI regulation. I still think there is a desire for integrity, and for compliance, and for certainty that will be well met.
NAIM ANTAKI: Thank you, Mark. The last time we spoke in May, we were also trying to help, I guess, help everyone who was listening get a bit into your head as to what was next that was not yet perhaps public. And if you remember, one of the key pieces that people were waiting on, in order to help solidify their compliance process, were the regulations. And you had mentioned then that regulations sector by sector was something that you felt was important.
Because obviously, if we're talking about such a general purpose act, it can be very difficult to apply in a lot of different cases. So I guess, has your thinking evolved on this? Are there any gaps that we should be aware of or to prepare for, even if they're not yet at the stage of regulation, or law or otherwise?
MARK SCHAAN: Yeah. I think my base thinking, which is that the digital economy necessitates us to maximize the tool set that we've been afforded and to use each tool for its own valuable function, hasn't changed. Which is, I still believe that there is a fundamental role for laws of general economic application that apply on a sector agnostic basis at the level of the whole economy.
And that I still think we should make sure that our competition law, our bankruptcy laws, our incorporation laws, our intellectual property laws, and our privacy laws are playing the general-application, sector-agnostic kind of function that they can play.
And for AI, we imagined that that general application would need to be risk specific, in the sense that the high-impact AI use cases that we set out in legislation, as the mechanism that would trigger your regulatory compliance, were a useful way to think about that. And I still think that's true.
But then I also still think that what builds on top of that, which is activity-based regulation, whether in transportation, or in financial services, or in product safety, or in health, has an extraordinary role to play in thinking about the AI applications that are obviously going to be specific to their particular vertical and their functionality, and that need to be built into things like product approvals or overall safety and compliance mechanisms.
I still think that there's a fundamental role for standards, certifications, and trust. And I think that there's a generalized level for trade rules, and ethics, and norms to play out. And that's the schematic that we produced in the original consultation document for the modernization of PIPEDA, where we said, in the digital economy, we think these four layers all have a function, and they need to work together.
And in fact, they actually have a mutually conducive relationship to each other, in that if you do lots on the certification standards and code side, it actually might minimize the burden that you might find yourself in on an activity-based regulation side.
Because the government will say, well, you've taken care of that. You've all signed up to the certification, and you've made it the best practice, and you're all doing it. That's great. Now, we don't need to do anything other than maybe we'll tell you to do it in regulation, and we'll incorporate standards by reference. Or we'll just say this certification is doing its job.
There are a few things that I think don't fit neatly into that category that I'm still thinking about. So obviously, one is we've created an AI safety institute. The world has responded with a network of safety institutes. And I still think catastrophic human risks as a function of AI deserve some sort of special treatment.
And it's not just because I spent the morning with Geoffrey, where he frightened us all in extraordinary detail about why we all should not sleep and worry deeply about the supercomputer's capacity to avoid us and avoid our oversight, but also what that means for synthetic content, what that means for cyber, what that means for bioterrorism.
Those are all extraordinary concerns that I still think need a distinct approach, including some of the national security considerations that I think will continue to need to bring to bear on both integrity of public function as well as on national security and hostile state actor usage.
The other that I think is more on the positive side is wearing my other hat, which is that my secretariat at PCO was created because there is a feeling that we have an opportunity to lead in Canada, in a number of areas of driving AI forward. And one of the tasks within that, that I'm tasked with, is thinking about, how does government actually prove out the possibilities of this technology in responsible ways, but at scale?
And so on the opportunity side, I am very much thinking about how, a, we adopt within our own large organizational structure as a federal public service, but also the role that we can play in anticipating and allowing the power of the technology to come to bear. And it ties into the third thing that is less positive. But I think about them together.
So one is anticipating the ways in which AI can help. So right now, the vast majority of our regulatory structures, the vast majority of how we oversee functionality within the economy, are extraordinarily analog and have not allowed for the possibility of the usage of new tools. We have a regulatory sandbox at Measurement Canada because they have anticipated that there's the possibility that one day, very soon, we may not measure things like what a liter of gas is in the same ways that we traditionally have.
And so they have regulatory flexibility built in to allow themselves to continue to potentially evolve in how they ensure that a liter is a liter is a liter, a function that most Canadians are completely unaware of when they show up at a gas pump-- that there is an entire function of government dedicated to ensuring that you're not getting ripped off about whether or not that's actually a liter being pumped into your car. We haven't done that, nor do we have the capabilities to do that, in the vast majority of our other regulatory structures.
So my colleagues at Transport Canada, tomorrow, are not allowed to receive digital twin data about the safety of aviation aircraft. They are completely rooted in a human oversight function of their regulatory responsibilities. They go and look at things and figure out whether or not they're broken, and then issue safety certification on the basis of that physical inspection.
That's true in food inspection, that's true in border agents, that's true in a whole series of those types of regulatory functions. So that's the thing I think I'm excited about that we're not necessarily there yet, but that I think about as the what next.
Then there's the scarier side of it, which is like-- my partner absolutely despises when I use this word because he doesn't think it's a real word. But agentic AI, which is just the use of AI agents engaged with other AI agents. So for those who are not in the parlance of the technologists, increasingly, these chat features are allowing me to send instructions to my AI agent, who may or may not engage with other agents of entities that are not humans that can do my bidding essentially.
My classic use case example to this is like, imagine a world in which the AI agent capabilities are actually far more mature than they are currently, although we're not that far away from that. Where I say, oh, it's time for our summer vacation. This is going to require an awful lot of work. I have no desire to do it. So I'm going to send off my AI agent to go and do my bidding for me in this particular zone.
And I say, I want to go somewhere hot, but not humid. I want to go for two weeks. We're going to go with our friends. So two of us are coming from Ottawa. Two of us are coming from Toronto. I don't want to spend anything more than $400 a night on a hotel. I ideally will stay in nothing less than four stars. I don't want to move cities more than once. But I do want a city that has these types of amenities. And I want a city that has arguably better beaches than most.
And I give it my credit card, and I send it off. And it goes, and it starts talking to the agents at Air Canada, and at Hotels.com, and Expedia, and a whole bunch of those organizations. And in one scenario, when there's a human in the loop, it comes back to me and says, Mark, you're going to Costa Rica. And like, here's what's happening. And you're going to spend this time in this city and this time in this city. Do you want to hit go? And here are all of the parameters. And here's the organizations that have been queued up for you.
In the non-human in the loop scenario, which I also think is quite possible, my AI agent just goes and does all of that for me. Now, one can imagine the dozens of potential risk scenarios here that I don't think we're prepared for-- for example, the one where my agent thinks it's dealing with the agent of Aeroplan, but it turns out that the agent of Aeroplan it's talking to is masked, and it's a bot, and it's actually either a scam artist or a hostile state actor that is potentially interested.
Now, first of all, from a privacy perspective, we have no current mechanism by which to think about those disclosures thoughtfully, in the sense that I've disclosed my personal information to my agent, my agent is providing and disclosing my personal information to an entity that's not a human, that's the agent of another organization. How do we categorize and think about those? When the breach happens, who is liable? When did the disclosure go wrong?
And then from a liability perspective, when I go back to my credit card company and say, hey, I just sent $2,000 for a hotel to a bot, can I have my $2,000 back? We are not anywhere near being able to answer that question, where they say, well, I'm not liable. You're liable. You gave your personal information and your credit card information to your agent. Your agent went and talked to some rogue actor, and they gave that away. And so we're not giving you your $2,000 back.
So mildly, potentially future oriented and maybe three use cases too far from where we're at currently, but I'd say, in terms of what next, we are not far from that agentic world. As I say, my partner hates the word. You can decide whether or not you want to use it yourselves. But I think those are two ways in which I am super interested in thinking about the degree to which we can actually move forward. Because I think we're going to need to come at overall oversight of algorithms, and safety, and responsibility, and trust.
I think we're going to have to expand and modernize our overall approach to imagine the positive use cases where these can play a fundamental role in making life more efficient, and productive, and easier to comply with. And then we're going to have to imagine the super hard things that are going to come at us, that do not fit well into any box, but are going to be the lived reality of citizens and consumers all over.
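To make the human-in-the-loop distinction above concrete, here is a minimal sketch of an approval gate sitting between an agent's proposed booking and the payment step, using the hypothetical vacation scenario from the discussion. All names (propose_itinerary, human_approves, charge_card) are invented for illustration; no real booking or payment API is implied.

```python
# Sketch only: a human-in-the-loop gate for an agent-proposed booking.
from dataclasses import dataclass

@dataclass
class Itinerary:
    destination: str
    nights: int
    nightly_rate: float
    vendor: str

def propose_itinerary() -> Itinerary:
    # In a real agentic system this would come from negotiation with other agents.
    return Itinerary("Costa Rica", 14, 380.0, "example-travel-vendor")

def within_constraints(plan: Itinerary, max_rate: float) -> bool:
    # Hard constraints the user set up front (e.g. nightly budget).
    return plan.nightly_rate <= max_rate

def human_approves(plan: Itinerary) -> bool:
    # The "do you want to hit go?" step described in the transcript.
    answer = input(f"Book {plan.nights} nights in {plan.destination} "
                   f"at ${plan.nightly_rate}/night via {plan.vendor}? [y/N] ")
    return answer.strip().lower() == "y"

def charge_card(plan: Itinerary) -> None:
    # Simulated payment; nothing is actually charged.
    print(f"Charging ${plan.nights * plan.nightly_rate:.2f} (simulated).")

plan = propose_itinerary()
if within_constraints(plan, max_rate=400.0) and human_approves(plan):
    charge_card(plan)  # payment only happens after explicit human approval
else:
    print("Itinerary rejected; nothing was booked or charged.")
```

In the fully autonomous variant described in the transcript, the human_approves step would simply be skipped, which is exactly where the disclosure and liability questions arise.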
WENDY WAGNER: I was smiling because I was speaking at an AI incubator event in Ottawa, just before the holidays. And as part of this, the organizers of the event actually created an AI agent to phone people, phone organizations, and tell them about the event, and then give them helpful information on parking and different things like that.
They recorded a conversation between their AI agent, who actually got another AI agent on the other line. Because they deal with very high-tech people, one of the people they reached out to actually had AI answering his phone calls, instead of answering them directly.
It was the most awkward conversation I've ever-- it was absolutely hilarious. We were all just rolling on the floor. And all I could think was, like, we're not quite in a Terminator scenario yet. Like, it actually, it was somewhat comforting. I was like, this is not that great. But yeah, I mean, it's here, it's definitely here already. It's not futuristic at all.
So those things are all very interesting. But to bring it back to the level of our clients and organizations we're dealing with, many of them are in regulated sectors, so they're dealing with the frameworks that you mentioned, but many are not. And we look out, and we see, now there are ISO standards for governance of AI. There are OECD frameworks. There's just a multitude of frameworks and regulation or guidance. And they're in a vacuum. And you must have looked at all of these because you've just been steeped in this.
So do you have any concrete recommendations as to if you're in one of those unregulated spaces, but you're either developing some forms of AI for use in your organization or to sell, or you're just onboarding AI tools? What frameworks would you look to? What are some practical thoughts on that?
MARK SCHAAN: Yeah. I haven't yet asked a large language model to essentially summarize the co-dependencies between all of the various principles and formats, but I'm hoping my answer will be better than what it would produce, but maybe not. There are some common pieces that live across all of those.
So if you boil down the EU AI Act, the South Korean legislation that just got passed in December, the Hiroshima code of conduct that was a product of the G7, the guide on the code of conduct for industrial use of generative AI in the Canadian context, the principles under the Biden-Harris administration that they had the large language models agree to, there are commonalities across all of those principles.
And even at the standards level, and this is not a knock on the standards work. I think the standards work is super important. I would just say that at current, you can't standardize that which is not common. The best use of standards is to take existing best practice and codify it as the natural function.
We are not yet at a level of granularity at the standards level, which is why the vast majority of standards in the AI space have largely been at a general application level of what is a good AI governance framework or what is a good overall AI framework for risk management. But the common principles across all of those are things that are, I won't say straightforward, but, as I say, frequent.
So what is the level of transparency? And that comes in two ways. One is when your client, your customer, or your employee is engaging with AI models and AI-generated content, are you making them aware of the fact that they are engaging with AI-determined or AI-generated functionality?
On the other side of transparency, if you are developing inference or models, are you making it clear to folks what it is that you are actually determining as a function of the AI that you are using? In terms of explainability, do you have the capacity to be able to, in relatively general terms, let people know what decision you are making and on what grounds you are making that determination?
I think there are varying degrees of risk tolerance within various sectors. But I was heartened by at least one of our very large corporates in Canada, who said, I will not deploy anything that my mathematicians can't understand. So if it is making determinations of probability or probabilistic prediction that we don't know how the math works, that's not going in my core product with my customers.
I think there are other folks who would say, well, our level of sophistication, particularly as a medium-sized organization, is never going to have the kind of math capabilities to know that. But at least I know roughly what we are asking the model to do and roughly what it is using to be able to make that determination.
And then in terms of-- so transparency, explainability, and then I think good governance, so who gets to make a determination about the uses that are being put to the model? And what evidence are they being provided for? Have we anticipated the possible misuse or harm that may come as a function of the use of the model? And what mitigation strategies are in place?
I don't think this is sort of rocket science in some ways, but I do think that it actually requires a discipline. And then I would apply things like a materiality test. Which is, to what extent is the determination that this is going to make real or impactful to the individual or to the customer? It's heartening when you look at the AI programs at some of our most leading corporates that are at the forefront of the use of AI in their sector, there's a bunch of common things that they've done.
One, they've done a massive investment in the education of their workforce. So most of them started with their C-suite. And they went and got some sort of executive education that actually said, all of you need to understand how AI is functioning and how AI is playing out.
Most of them have found that insufficient because it turns out that the business process owners who are actually going to deploy the technology are more important to the determinations of AI functionality within the organization than the C-suite. So then they had to go and take them through the course.
The second is they put in place some sort of governance that actually includes a materiality test to say, what will this actually make a determination of? And who loses if it's wrong? And I think in some cases, the materiality test worked out basically to say we will probably be the loser if this is wrong. And so therefore, materiality is low.
If this is going to make a determination that this expense is actually covered, rather than not covered, in the vast majority of instances, and it's relatively low cost to us as an organization, that might be worth it. But if the determination is actually that our customer pays because they're actually going to get denied service, and that's material, well, we should probably think about whether or not we have a good enough understanding of what it is that we're deploying.
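As a rough illustration of the materiality test described here, the sketch below scores a proposed AI use case by who bears the cost if the determination is wrong and how impactful it is for the person affected. The scoring logic is invented for illustration and is not drawn from any regulation or standard.

```python
# Illustrative only: a coarse "materiality test" for a proposed AI use case.
def materiality(use_case: str, who_loses_if_wrong: str, impact_on_person: str) -> str:
    """Return a rough materiality rating based on who bears the cost of an error."""
    if impact_on_person == "high":
        rating = "high"     # e.g. a customer being denied service or coverage
    elif who_loses_if_wrong == "organization" and impact_on_person == "low":
        rating = "low"      # e.g. over-approving a small internal expense
    else:
        rating = "medium"
    return f"{use_case}: {rating} materiality"

print(materiality("expense reimbursement triage", "organization", "low"))
print(materiality("insurance coverage denial", "customer", "high"))
```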
So I think there's enough commonality. I think the issues that people have with the very specific regulatory structures or some of those principal documents vary. There are lots of critiques of the EU AI Act about why people don't like particular aspects of it. But generally, I think if you've thought about transparency, if you've thought about explainability, if you've thought about good governance and materiality of harm, I think you're in good shape.
And I think that should guide your determination. And I wouldn't sit back and say, I'll just wait until there's binding regulation with clarity about every single kind of dotted I and crossed T. Because, to be honest, I think that posture will leave you behind with respect to your competitors. And I think it will also deny you the huge advantages of a technology that is advancing extraordinarily quickly.
NAIM ANTAKI: Thank you. Off the cuff, as you know me, what about optionality? I've seen in some case law or some guidance, some comments about, well, if you are forcing all of your customers to use AI versus if you give them a different path. Is this something that you feel is important from a regulatory standpoint, or that should remain the choice of the company itself?
MARK SCHAAN: I think a lot would depend on the rationale for why you're pursuing the optionality. So as a liability-lessening mechanism, I'd say you're probably going to find yourself in mixed company when courts adjudicate your rationale for that, to say I shed my liability because I provided optionality to my consumer as to whether or not they wanted the AI path or whether they wanted the non-AI path. And because they clicked yes to using the AI tool and functionality of my organization, I'm no longer liable because they chose it.
We can look to privacy. We can look to any reasonableness test that has been established through jurisprudence to date, where I think most people have suggested that simply putting the onus on the consumer without appropriate education or understanding is insufficient in terms of mitigating your own obligations.
That said, I think consumer choice in using AI-related tools has long been a feature. I mean, this is the new cookie landscape we find ourselves in every single day, where depending on which websites you go to, you're offered extraordinary granularity about what things you're willing to provide and what things you're not willing to provide, or whether or not you just said accept all.
I think most of us are probably in the either all no, all yes. I don't know. There's probably lawyers in the room who love spending 10 minutes going through the functionality of the menu and choosing exactly which cookies they're willing to accept. I'm not there. But I do think it's interesting to give people an option to participate.
And I also think there is differentiation in the marketplace, particularly as it relates to the responsible use of AI and the fact that people are actually considering themselves rewarded for their participation in the development of stronger and better models. There's good literature on the degree to which people are willing to consent to participating in things when they're made aware of their opportunity to be able to make things better.
I think that's the premise of lots of health-related data consent functionality is to say, well, if you knew that your data might cure cancer, would you be willing to share it with a whole bunch of people who normally wouldn't have access to it? And there's varying literature that says, oh, lots of people will say yes, and lots of people will say absolutely not.
So I think it's, what's the intent behind the optionality, and where does it fit within your overall approach to consumer satisfaction, that would be more compelling than just it's good or it's bad. Because I think, as I say, if you're trying to do it to limit liability, I look forward to reading the jurisprudence because I will want to know how you did.
WENDY WAGNER: OK, yes, I'm a cookie banner connoisseur.
[LAUGHTER]
What else is out there? What's really neat? Anyway, so that's true. So I do want to go back to that question that I prefaced when I was introducing you. Which is, so we're in now this regulatory void, and it will take us a long time to get out of it, I would imagine.
And why don't we just say that the EU AI Act applies in Canada? I get this question from clients all the time with GDPR. If someone just said, OK, well, Canada's adopting GDPR, then I'd have one framework. And then I could just have my GDPR policy, and I wouldn't need to have my PIPEDA policy, and my CCPA policy.
And I see the same with AI. Why can't we-- I mean, I did look at the EU AI Act. There are differences in what's considered to be high risk. They've got critical infrastructure. They've got some things that we don't have, it is different. But why is it necessary to have something different? Is it because it works in particular ways with our legislative framework? Or, what is the reason for that?
MARK SCHAAN: Yeah. Maybe a few things that I'd offer. One, on the privacy side, on GDPR, my answer is easier, in some ways, which is to say we already had adequacy, which allows the vast majority of private sector entities, who are, by and large, operating within either a Canada-specific framework or a Canada and EU-specific framework, to essentially conduct themselves with full treatment under our law, without having to factor in how they will comply with GDPR, because they comply with PIPEDA.
We also, in the GDPR case, saw things that we felt we could and wanted to avoid, namely that the compliance burden is extraordinarily high and is best met by the largest players. I actually think it's anathema to what I believe to be an unsaid intent of the GDPR. The public intent of the GDPR is to protect the privacy of Europeans to a high standard, because there is a compelling belief that privacy is vital and expected by European citizens.
I think the unsaid portion is, in a geopolitical world in which the dominant tech platforms are all non-European and are American, are there mechanisms we can put in place to try and level the competitive economic playing field, to have European players emerge who may become dominant in their ability to meet our obligations in ways that the American dominant players will have no interest in potentially meeting?
I think the outcomes have actually been anathema to that, which is that the people most able to comply with GDPR are in fact the existing dominant tech platforms who can devote teams and teams and teams of regulatory specialists and lawyers to actually ensuring that they're fully compliant with the act and organize their structures and operations in such a way as either to minimize the impact of their GDPR obligations in a European context, while maximizing their freedom in non-European jurisdictions.
So there was definitely a piece for us on the PIPEDA side that said, if this is about preserving SMEs and ensuring there's actually ease of compliance to allow for what is the heart of our Canadian economy to continue to compete, well, adopting this holus bolus is not going to be helpful. Because actually, the proof is that they're not super well equipped to be able to meet the regulatory burden that's coming with GDPR.
The second is there were things that I just think they got wrong. So I think some of their consent provisions weren't awesome. I think their data portability provisions created the exact nightmare that I hoped we would avoid in PIPEDA modernization, which is that they told everyone that they had a data portability right. They told everyone that they were freely allowed to receive the personal information that they had provided to a private sector entity and transport it to someone else.
Well, first of all, literally the day after the GDPR passed, people showed up at their bank and said, I'm supposed to have a right to access my data. And the bank said, well, sorry, I don't know how to help you. And then we created a massive cyber risk, because it allowed the actual recipient and repository of the data to be the human itself, the customer, who is the worst possible person to hand an XLS or a CSV file and say, here's all your financial transaction information, go down the street to the other bank, and they'll take your USB key and plug it into their system.
Anyway, so our data portability provision was premised on the notion of a customer's application to say, I want to move my data from one service provider to another. The service providers themselves would do it. Anyway, on AI, I think similar considerations apply. So one is we think the compliance burden of the EU AI Act is high. We think the regulatory uncertainty under the EU AI Act is high.
The Europeans-- and again, I've said this with Europeans in the room, so if I was already not on their holiday card list, I'll maintain my status there-- have found the wonderful sweet spot that is both highly prescriptive and yet highly vague in terms of compliance mechanisms, in the sense that the obligations are very clear, but the actual mechanism by which you live them out is completely opaque.
And so I think we wanted none of that as it related to our approach to AI, which is why we thought that we potentially were better off pursuing our mechanism, leaving a lot into the regulatory structure, allowing for time to be able to build compliance capacity.
The last thing I'll say on the AI Act, in general, is the origin story of AIDA, which, sorry for those of you who have heard me tell this before, was not what we set out to do. We didn't walk into the room and say, what we really need is a general-purpose AI law. What happened was Minister Champagne arrived. He had the warm corpse of C-11 on his desk. And we said, do you want to revive it?
And the general answer was yes. But then he started to ask all of these questions, and this is the fun of policy making and the lived political environment. He'd had a conversation with the Facebook whistleblower where he got very worried about algorithmic targeting, and particularly the targeting of children. And so he kept coming back to all of these use cases. And we kept saying, well, sir, I hear you, but the nexus of this law is personal information.
And so this generalized algorithm that's feeding off of an aggregate imprint of 14-year-old girls, telling them that their bodies are awful and that they might want to consider harming themselves, you're not going to be able to ban that under this framework, because it has to be specifically that 14-year-old girl, and her personal information being used to make the determination, before you're allowed to say that's an improper use.
And so we arrived at AIDA because of the limitations of personal information protection as the mechanism of action to get at algorithmic harm. And the EU AI Act wasn't sufficient, in some cases, for what we were looking to do. And we thought, maybe cockily or maybe with too much hubris, that we might be able to do it better.
And we understand the compliance burden, but that's why the interoperability provisions we've been able to build in a whole series of sectors matter. On joint manufacturing approvals for safety and life sciences, for example, we have a blanket arrangement with the Europeans: any facility approved for pharmaceutical manufacturing in this country is automatically approved in the EU, and the same applies the other way. Adequacy is another mechanism. We think we have other mechanisms by which we can preserve some of our policy sovereignty while still getting interoperability. But it's a hard row to hoe.
NAIM ANTAKI: At the last competition summit, there was a very interesting panel that included not just people from the Competition Bureau, but also, as you remember, people from the copyright office, et cetera. Can you tell us a little bit more about the interaction, or proposed interaction, between different agencies, departments or parts of the government?
MARK SCHAAN: Yeah. I'll answer it in two parts. On the interaction between the laws themselves, I'll talk about it in two buckets. One is general application to general application: a general application law and another general application law. How do they work together? And who gets to decide who's on first and where things fit?
So there are two things that I think we've done or tried to do. One is we've allowed for sharing of information and continued engagement with each other. So the Commissioner of Competition is allowed to share information and talk to the commissioner of the CRTC. And the AI and Data Commissioner, as originally proposed in AIDA, had not just the explicit capacity but an obligation to engage with other commissioners who might be relevant to their work.
And so on general application to general application, I think we've largely tried to solve for that by essentially allowing them to talk to one another. And in the internal mechanics, part of the function of my world as the deputy secretary for AI is to think about the plumbing for that on a more generalized basis, how we bring intersecting aspects of AI together from a policy perspective.
So how do the geospatial uses of AI for atmospheric data at ECCC, Environment and Climate Change Canada, link up with the Earth observation and satellite imagery data that potentially sits with our friends at Natural Resources Canada, who are thinking about it from an emergency management perspective? Can we have those capabilities talk to each other? I think we're trying to do that both on a regulatory basis and on a broader policy basis.
The other good feature, on general application to general application, is that they've come to their own determination that they should all hang out. So they have a technology cluster that includes the CRTC, the Commissioner of Competition, the head of the Copyright Board of Canada and the Privacy Commissioner, who all now, I think, four times a year, get together and talk about common issues.
On general application to sectoral application, or activities-based regulation, I think that is still being worked out. I'm old school in that I continue to believe in paramountcy for the general application statute. But that's just because it's where I grew up. I think about it in the FPT world, the federal-provincial-territorial world, where I'm always like, federal paramountcy is always the case.
And I continue to believe that with general application laws, it should be the AI law or the competition law that we think about first, and then the transport law or the activity-specific regulation. The financial regulators should follow suit based on what we're doing. I think there's some healthy tension there that will continue to sort itself out. So yeah, in general terms, I think there's room.
I think the reality is that people, particularly on AIDA, talked about the fact that they were going to become doubly regulated. And they talked about that as if it were novel, as if this were the first instance of someone being doubly regulated. I'm willing to push back pretty hard on that, because there is very little in this world that remains singularly regulated.
Even in financial services, the holy grail of the Department of Finance's independent authority to manage prudential risk on its own, PIPEDA still has paramountcy; the Bankruptcy and Insolvency Act, less so, because they've got Aurora. So I think in general terms, we're still working through some of those mechanics.
But I think we increasingly need to think about how, just as we want one application, one permit, we need one technology application and a joined-up approach that eases compliance but doesn't negate the fact that this has privacy concerns, this has competition concerns, this has investment review concerns, which is natural.
Because the floors, lighting, heating, cooling and functionality of the space we are currently in are probably subject to, at minimum, five statutes or standards. No one is saying the lighting people should get to determine everything about this room, or that the standard for safety of consumer products should govern all of the functionality that is required. You'd say, well, no, the engineers probably have something to say about whether or not we feel safe on this floor.
WENDY WAGNER: So I think we should turn it to questions now. And maybe what I'll do is-- and if we don't have any, then we will have no shortage of things to talk about. But there may be questions from the audience. So I can roam around, just so that people's questions are audible to people online as well. So if you have a question, just put up your hand, and I'll make my way. Hopefully I won't fall down the stage.
AUDIENCE: This question is actually for our hosts. How has the legal community positioned itself to provide legal advice using AI?
WENDY WAGNER: We haven't really. No, we're trying our best. I mean, I think Naim's explanation of the services we're providing was useful in that respect, because in a law firm it's very multifaceted. The hardest thing to get your head around is what everyone in the firm is doing. And depending on our practice areas, almost all of us now touch AI. We have a tech sector group in the firm, but everything is tech sector now. There's nothing in the firm that doesn't have a tech aspect to it.
And I think AI is very much like that as well: everything is AI. So our job is to make sure that when a client comes to us, we're covering off every legal aspect of AI regulation and not missing anything, that IP is considered, that data is considered, that general contractual considerations are on the list, and that we're not missing anything sectoral.
AUDIENCE: The question was about generating advice using AI.
NAIM ANTAKI: Yeah, I can perhaps.
WENDY WAGNER: Go ahead, Naim. We're actually going to have a whole session on that; it's probably going to be number three. And we now have a director of AI for use within law firm practice at our firm. His name is Al Hounsell, and he'll be speaking at that session. So I think that'll be on the agenda. But go ahead, Naim.
NAIM ANTAKI: Yeah. We had to walk the walk ourselves because we are using, in certain cases, AI tools at the firm. However, we had to think about it in a very structured way, the same way that you all have to think about it. And so we had a multidisciplinary group, not just lawyers, but a lot of different people, trying to think about, well, first, are we using AI without knowing?
And second, do we do what some clients we've heard about have done and say we're not allowing AI use at all? That's not where we landed, but we landed in a very structured way. The first thing is cybersecurity. You can't just use whatever ChatGPT wrapper from a God-forsaken country with client information. There's a very structured process for which tools are approved. Then, tools are piloted by people who are knowledgeable about whether the answer is right or not and about what type of information you are inputting.
And then, similarly to what we've seen with clients, there's always a discussion. A lot of clients have not gone as far as putting the AI front and centre with the customer. There should always be a barrier at some point where you say, OK, I've looked at this, and perhaps it's really great. Or perhaps, as Yoshua Bengio likes to say, the answer is confidently wrong. And you need to be knowledgeable enough to know the difference.
But I think it's putting our heads in the sand to say, well, we're just not going to use it, we're going to wait for everybody else to use it. You need to start; it's a journey. And there are existing standards to draw on, which Wendy, Antoine and the rest of the great privacy team have been working on. You don't start from zero. Data quality is something you have to think about, no matter what.
Change management is not new; it's not the first time, and it won't be the last, that we have to deal with it. You just need to think about, OK, how do we make this work? How do we mitigate the risks? Where are we comfortable being wrong? And most importantly, does this align with our values as a firm, just as it would have to align with your values as an organization?
So working with clients, boards, et cetera, that is a very important question and something that really permeates the decision. But it's always a decision at a point in time. I think the big mistake is to say, well, I looked at AI last year, I'm done. Unfortunately, you have to revisit it every six months perhaps, not just every year or every two years, because things are advancing so rapidly.
So you have to just jump in the water. And if you're not the only one, hopefully someone helps you not to sink. But you need to get in there. You have to do it. And we have some of our great folks that can help on some aspects of that for sure.
WENDY WAGNER: Yeah. Many of our clients' outside counsel guidelines now require prior approval of AI use as well. So definitely, figuring out what you're actually using is a must. Other questions?
AUDIENCE: Thank you very much. First of all, what a great session. Mark, you are interesting, and engaging, and funny. Thank you so much for being here to talk to us.
WENDY WAGNER: Wonderful.
MARK SCHAAN: My pleasure.
AUDIENCE: I'm going to ask a little bit of a winding question. So forgive me, I think context is needed. I am a lawyer, but most of my work has been in-house for financial services, where I was chief compliance officer, chief anti-money laundering officer. I now do consulting. But in that capacity, I've actually implemented and used a number of AI systems.
And we have big data. The number of transactions I sort through, I love them. And the first one I ever implemented is so old, it was forward chaining. So this is how far back it goes. And I'm just going to give you a couple of examples to tell you what I've been noodling over lately, and I really would love your thoughts, Mark.
The first system I ever worked with, that I installed, was a transaction monitoring system for fraud. And it was great. And one of the things we learned from it, and when I say we, I don't mean the company, I mean fraud examiners, is that when people make up numbers, they tend to end in a 7, an 8 or a 9. It's just a factor of human psychology that we think $11,001 looks suspicious but $11,289 does not. And that's a really useful data point, for example, for the CRA.
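A minimal sketch of how a terminal-digit rule like the one described above might be expressed in code, purely illustrative rather than drawn from the speaker's actual system; the record layout, field names and the 10,000 threshold are assumptions:

# Illustrative sketch only (not the speaker's system): flag large transactions whose
# amounts end in 7, 8 or 9, the pattern described for fabricated figures.
# The 10,000 threshold and the record layout are assumptions for the example.
def flag_suspicious_amounts(transactions, min_amount=10_000):
    flagged = []
    for tx in transactions:
        amount = tx["amount"]
        # Look at the last digit of the rounded dollar amount.
        if amount >= min_amount and int(round(amount)) % 10 in (7, 8, 9):
            flagged.append(tx)
    return flagged

sample = [
    {"id": 1, "amount": 11_289},  # ends in 9 and over threshold: flagged
    {"id": 2, "amount": 11_001},  # ends in 1: not flagged
    {"id": 3, "amount": 9_847},   # under threshold: not flagged
]
print(flag_suspicious_amounts(sample))  # [{'id': 1, 'amount': 11289}]

In a real monitoring system this would only ever be one weak signal among many, feeding a score rather than a verdict.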
Now, about four years ago, which I realize is a millennium in AI, we tested an AI-enabled trading algorithm. Any large data set will have true patterns and false patterns. And one of the things it kicked out was that every third Tuesday, the market crashes. So we didn't actually use it; we ran it alongside. The third Tuesday comes up, and it tells us to sell the 15 worst-performing stocks. Imagine 20 algos doing that. We could tank the market. So there are a number of issues.
And when I think about what I'm implementing, or helping to implement, today, I really worry about human rights. I never hear that in the laundry list. And for you, it's constitutional. So let's think about it. Say the CRA is using a program, and it says that based on the last three digits of your postal code you're more likely to be cheating on taxes, or, God forbid, because your last name ends in an i. And I say that as a woman married to a great guy who will find that funny.
So that would be a constitutional violation that we really need to think about when we're enabling it. And I don't hear that in the dialogue. I'm wondering how that's going to come out in your thought process, especially in enabling the public sector. But the big five banks, if we start enabling this stuff and it makes these decisions, I mean, they're in everybody's home too. So I would just love to hear your thoughts on that particular black box issue. Thank you.
MARK SCHAAN: Thanks so much for the question. A few things come to mind as responses. One, I think we absolutely need to continue to think about the collective risk of the deployment of artificial intelligence as well as the instance risk. And what I mean by that is exactly your example: the possibility of wide-scale use of similar models that come to similar determinations and that, particularly if wrong, have the capacity to have real implications for our well-being.
And so this is where I think the algorithmic laws or obligations that may or may not get put in place are insufficient in and of themselves for the risks of artificial intelligence, which is why the OSFI standards and the work by the Competition Bureau on algorithmic collusion, even unwitting algorithmic collusion, are super important considerations.
Because the kind of algorithmic collusion you're contemplating, from a financial transaction, market-selling perspective, has a materiality that's critical, but so does, for instance, the capacity for pricing algorithms to ultimately arrive at collusive behaviour, where suddenly all of us are paying more for things because my algorithm has determined that the most expedient route to maximizing profit is the same one as yours.
And essentially, it's like that agentic world I was talking about before, except we've got algorithms that are basically acting as agents working with each other in ways that ultimately harm consumers. And so I think it's an all-in approach.
On the human rights and consumer side, this is where I think effort is needed in a couple of spaces. One is, the reality of why we chose the high-impact AI systems we chose for AIDA is that many of them relate to exactly those considerations: employment decisions, coverage decisions, health decisions, consumer preference and recommendation decisions have an extraordinary capacity for bias and discrimination that needs to be understood at the outset, given the degree to which, at scale, it can create massive amounts of human harm.
And it also relates to some of the determinations that need to be made about go and no-go zones, particularly while AI is in its infancy. Just prior to this, I was in a conversation with a financial services firm on the more fintech-y side of the ledger, talking about an obvious no-go use case that I think we have to consider in relation to what is not an obvious no-go zone.
The obvious use case is law enforcement data, which most people believe to be a pretty standard no-go zone. If I use crime data to determine where to deploy enforcement resources, it turns out I will replicate exactly the same patterns of potential discrimination that already exist. Which is to say, my data shows that if I go to particular parts of the city, I'm likely to find crime, so I send more officers to those places, and every time I send someone there, we find crime.
But I think we can all agree that's problematic for all sorts of reasons. Because, a, you're already making determinations about the postal codes you're not going to, which may turn out to have just as much crime, but there are all sorts of reasons why you're not finding it there. So if the system is predicated on doing that, most people would say that's a poor use of AI, and it's probably a no-go zone.
In financial services, interestingly, we have analogues to that. We currently, given the availability of data, make all sorts of credit availability and financial determinations on the basis of very similar functionality. We use proxies that are, by their very nature, data sources potentially insufficient for understanding the true degree of risk.
So we use household income or salary data as a main marker of creditworthiness. When in fact, unsalaried workers may actually have lower degrees of precarity in a number of respects, but don't necessarily have a mature enough data history to factor into a higher credit score. And so they become either underserved or excluded from access to particular types of financial services.
And I think that's where AI can actually get us to better-than-human results in some of these spaces: not just addressing bias and discrimination, but potentially being a positive force for eroding the analog bias and discrimination that's replete within a number of systems currently deployed. So I certainly think about the human rights piece. It was the basis for much of AIDA.
Interestingly, in Geoff's talk this morning, he thinks it's the most fixable of the AI problems. I'm buoyed by that, but also, yeah, mildly scared sometimes.
AUDIENCE: I asked for the mic back to ask you about iterative feedback, because we all think about the first round of data, but we don't think about the second. But you just said exactly that when you raised the point that you send the cops, you find something.
There's a startup in California, and I just saw a demo. They make a client complaint handling tool, an AI tool. And in the demo, it actually told us that we should settle with higher net worth people because they're more likely to go to a law firm like Gowlings, spend two years fighting with us and spend more money. So I take it back, and I'm sorry, because the point already came up. Thank you, Mark. Fabulous. Again, really appreciate it.
MARK SCHAAN: Thanks much.
WENDY WAGNER: I think you're going to wrap it up.
NAIM ANTAKI: Thank you again so much, Mark, first and foremost, for being with us and, again, being so generous with your thoughts to help us look into the future and to think about what's next. Wendy, thank you also. You've been really an amazing person as always and a driving force for this. And we have a number of other people at the firm who are also eager to talk to you about some of these issues.
We're here, we are here together. And I've always said, whether I'm speaking at places that are only with engineers, or only with lawyers, or only with business decision-makers, you can't just talk to the same people. It's boring, and you don't get the right questions. We need to inform ourselves and each other. And we also need to help the government to look into maybe some issues that they may not yet be ready for or they may not know about yet.
And this is why sometimes people say, well, why was Canada not the first with federal AI legislation when it was the first in the world with a national AI strategy? Well, there are a number of reasons, but one of them was consultation. And things change very quickly. A little anecdote: the EU AI Act that Wendy was talking about was ready to be passed, and then ChatGPT happened. They had to go back to the drawing board, and it delayed the passing of the EU AI Act by a number of months.
So this change management, this looking again at the state of the law, the state of the technology, the state of business decisions, is something we just have to do on a regular basis. And it's by doing that that we'll all get better at it. We look forward to perhaps having you with us at our next session. And we have, of course, a lot of great articles and guides, just like the AI guide that Wendy mentioned. And we're ready to help you if you wish. Thank you again for being here today.
Gowling WLG’s AI on the horizon event series is a four-part exploration of the evolving artificial intelligence (AI) landscape.
The first session focuses on the regulatory framework shaping the future of AI.
Our panel of lawyers, joined by guest speaker Mark Schaan, Deputy Secretary to the Cabinet - Artificial Intelligence, will discuss:
Discover how to help your organization mitigate risks, adapt to new regulations and succeed in a dynamic, tech-driven environment.
This program is eligible for up to 1.5 hours of substantive CPD credits with the LSO, the LSBC and the Barreau du Québec.
NOT LEGAL ADVICE. Information made available on this website in any form is for information purposes only. It is not, and should not be taken as, legal advice. You should not rely on, or take or fail to take any action based upon this information. Never disregard professional legal advice or delay in seeking legal advice because of something you have read on this website. Gowling WLG professionals will be pleased to discuss resolutions to specific legal concerns you may have.