Al Hounsell
National Director of AI, Innovation & Knowledge
On-demand webinar
CPD/CLE:
CHRISTOPHER ALAM: Well, welcome everybody. Welcome to those of you who are with us today at Gowlings in Toronto. Hope you're enjoying your lunch. And welcome to those of you who are joining Gowlings online. I hope you're also enjoying your lunch. My name is Christopher Alam. I'm a lawyer practicing in lending in Toronto, and I'm also the co-lead of our artificial intelligence subgroup nationally, along with my co-lead out of Montreal, Naim Antaki, who I'm sure is enjoying his lunch online.
So, welcome to our panel discussion, AI and Legal Departments: Unlocking Innovation and Efficiency. This is one in a year-long series of panel discussions that we're putting on about artificial intelligence topics. So, watch out for those. We've got a long series of topics set out, such as privacy and governance, and they'll be rolling out throughout the year. And we hope that you'll enjoy all of these. I noticed that our full title is AI on the Horizon. And I'm not actually sure that it's on the horizon. I think the horizon might be here right now.
I was noticing as I came in this morning that there's a lot in the news that touches on AI. This very morning, the White House put out a request for public comment, inviting the public to discuss its artificial intelligence action plan. If you don't know, there's a new artificial intelligence executive order that rolled out a couple of weeks ago. And in the first Trump administration, this was something at the forefront of the president's agenda.
And you may not be surprised to hear that this order directed the development of an AI action plan to sustain and enhance America's global AI dominance. Part of that plan is to prevent unnecessarily burdensome requirements from hindering private-sector innovation, and the Deputy Director of the Office of Science and Technology is directed to reduce those burdens. So, we expect to see a lot of developments in that area. And if you have comments about that, feel free to write to the White House no later than March 15.
Interestingly, of course, our regulation is a little bit diminished with prorogation. So, perhaps, with nothing going on here, we may be able to compete for some of that dominance. The other thing I saw today that was kind of interesting was you may not know this, but AI chatbots can now talk to each other. For those of you who are of a certain age in the room, you may remember your dial-up modems. And what those dial-up modems sounded like? Well, that's what it sounds like when AI chatbots talk to each other. I saw that this morning.
It's a little bit frightening because there's actually a scene in Terminator 2 where they do exactly that. But then the thing that really caught my attention this morning as I was coming up in the elevator is that there's a new music album out. More than 1,000 musicians — munitions? That's me thinking of Terminator. More than 1,000 musicians, including Kate Bush, Damon Albarn, and Annie Lennox, have released a silent album in protest of the UK government's plans to let artificial intelligence companies use copyright-protected work without permission. So they're quite worried about that.
And I thought, well, that's pretty interesting. I wonder what would happen if I tried to get AI to write a silent album. So, I said to Grok, would you please write a silent album which protests the takeover of the music industry by AI? And Grok did. I won't read you the lyrics because they're terrible. But it did come up with the framework for an album. And the album, as named by Grok, is called No Machine Can Sing My Soul.
And the track listing is Static Over Wires, Human Noise, Algorithm's Grave, The Last Real Verse, and Mute Rebellion. And I took from that that Grok is either a Rush or a Megadeth fan. But it explained to me that these lyrics are to be heard in the mind, because it's a silent album — a poetic middle finger to AI's encroachment on creativity. I thought that was clever. So, it's pretty good. And I said, well, Grok, isn't it kind of a thing that you're an AI and you're writing a thing about protesting AI?
And it said, fair point, I see the irony — me, an AI, writing lyrics to protest AI taking over music. But I'm not here to replace the human spark. I'm just a tool tossing you some raw material. So, in the spirit of thinking of AI as tools, we have a terrific panel here to talk to you about how you can use these tools in your legal departments. We're going to cover about nine topics. For those of you in the room, feel free to put your hands up if you have questions as we go.
We will have a good chunk of time at the end for questions. And for those of you online, please feel free to use the chat or other functions to enter your questions. Our panelists will receive those and can address them at the end. So, our panelists: from Gowlings, Al Hounsell. He's our moderator as well as a panelist. He's our National Director of AI, Innovation & Knowledge, with over a decade of experience in legal service innovation, and he helps teams, including ours, leverage AI, automation, and strategic technology to enhance efficiency and client outcomes.
Before joining Gowlings, Al led a legal operations team at a major international law firm advising corporate legal departments on contract life cycle management, workflow automation, and legal tech strategy. And he directs multidisciplinary teams to streamline operations through AI and data-driven solutions. He's an adjunct professor at Osgoode Hall Law School, where he teaches legal innovation, AI, and technology and won the 2024 Innovative Leader of the Year.
Jonathan Leibtag is joining us from Microsoft. He's Senior Corporate Counsel at Microsoft, and is the Americas legal lead for Microsoft software and digital platforms business, an organization supporting Microsoft's innovative startup, enterprise software, and digital platform customers. Jonathan was a member of Microsoft's strategic pursuits team, a global legal team dedicated to the company's most strategic and novel transactions across its commercial business.
In addition to transactional work, Jonathan is deeply engaged with engineering teams and business leaders across the company tasked with driving its AI ambitions, including being Microsoft's primary legal support for the Frontier Model Forum, an industry body developed in collaboration with Google, Anthropic, and OpenAI to promote safe and responsible development of frontier AI models. Before joining Microsoft, Jonathan was an M&A lawyer in Toronto.
Keri Wallace joins us from Uber Freight. Keri is a highly accomplished legal director at Uber Freight, where she has significantly contributed to product development, launch, and lifecycle management. Keri has led legal efforts for key partnerships and alliances and managed legal matters in Canada. She provides strategic advice on marketing initiatives and has been at the forefront of integrating various emerging technologies in the legal department.
She's been actively testing legal gen AI tools within Uber Freight, and has spent significant time collaborating with the engineering team to build off existing AI tools. And she would like you to know that credit for that bio goes to gen AI. Now, I will turn it over to you.
AL HOUNSELL: All right. Thanks so much for that introduction, Chris. Thanks, everyone, for coming. It's so great to have you here. And I'm super excited about our panelists today. I have known each of them for a couple of years. A couple of years ago, I read an article — I forget what publication it was — that Jonathan had contributed to or written, and I thought, I've got to reach out to this guy; he's got some really interesting views. And I was on a panel with Keri a year ago, I think, where she had some amazing views as well, from the perspective of a company like Uber that's doing a lot of interesting things with AI.
And what we're going to discuss today, we're going to look at the journey of different legal departments, what that has looked like, and some of the things that they've learned along the way, what are some of the use cases right now that are delivering value, what are the differences between legal specific AI tools and the general AI tools, and how does that relate to in-house legal work? What are some of the challenges that we're facing? What are some of the low-hanging fruit that we can work on now, compliance issues, and how do we communicate the value of AI to executives as leaders in our legal departments?
So, I'm super excited to hear what each of them has to say. And maybe I'll pass it to Jonathan. If you want to give us a bit of an overview of why it's so crucial for legal leaders in corporations to have some AI literacy.
JONATHAN LEIBTAG: Yeah, sure. And thank you so much for having me. It's always great to join these types of panels, especially with folks like yourself — I'm always able to learn so much from you. So, I'm thrilled to be here. It's important to understand the potential of AI because, frankly, it's not going anywhere. It's not a blip in time. This is something that's certainly going to be here, and it's going to continue to evolve. And as powerful as it is today, a great line that I continue to hear is that it's the worst it's ever going to be today, because it's improving so much and evolving in such amazing ways.
But it comes down to the lens that we have to have when we view AI: as something that can optimize, and augment, and bring out the best of what it is that we do. And if we look at it that way, I think we'll be more curious to explore AI in meaningful ways. And if we can develop that culture within an organization, we're going to see how valuable AI can be. But it starts with having that baseline notion of, OK, I need to learn about this stuff. And that's what AI literacy is: wanting to learn and just understanding the basics in order to explore the potential.
AL HOUNSELL: That's really well said. And I guess before we jump into our first topic, Keri, any high level thoughts? What should legal leaders be thinking of in order to frame our discussion?
KERI WALLACE: I echo everything Jonathan says. And I'm happy to be on the panel — happy to be on a panel with you two as well. But I think we were just talking about how every company out there right now has tasked their leaders to find a way to incorporate AI. People are really receptive to it. Everyone's looking to add this to their team. And in other departments, they might be able to hit SLAs and give really good feedback.
Like, ops can say, hey, we used this AI to answer a phone call, and wait time went from two minutes to one minute. We don't really have that as in-house counsel. Sure, there are SLAs that can shorten — of course there's low-hanging fruit; there are NDAs where you can get a quicker turnaround. But for the most part, we shouldn't get distracted by our ability or inability to do that, because there are a lot of ways that it can augment and just make your life easier and faster.
It's not necessarily going to be that you turn to your leadership team and say, I saved x amount of time, as if that's the measure of all things. It's really just making your life more efficient, making your team's life more efficient.
AL HOUNSELL: That's great. So, why don't we go back to the start of how each of your legal teams began experimenting and developing some of these learnings around AI and generative AI? The legal teams represented here, as well as those that have joined us online, might be at varying stages of understanding how to really make use of AI and generative AI within their teams. So, Jonathan, if you want to kick us off. You were customer zero for Copilot. And there's a lot of learnings that you got from within the context of Microsoft of how to use some of these tools for legal. If you want to describe your journey.
JONATHAN LEIBTAG: Absolutely. So, it's very interesting being at Microsoft, because we don't just see AI as the next significant evolution in technology — if you're reading the news, you really see AI as the driver of our success now and into the future. We're all in as a company. And if we're going to be evangelizing and socializing the benefits of AI to the marketplace, whether societal or enterprise benefits, well, we'd better be power users ourselves.
So, we were certainly customer zero. And we were really being pushed, and even incentivized, to start leveraging AI, and the benefits were being communicated to us as to what it could do for us. And in terms of the journey, I find it interesting because, as much as we were evangelizing AI and getting excited about it, when the spotlight turned to us to start using it, we had the same excitement — but that same skepticism that you're seeing in the marketplace started to dawn on us as well.
Those same fears of, OK, well, you're telling me about the potential of AI to transform the legal workstream — so, how does that impact me and my job? It's the same question we see on a societal basis, despite us being the biggest evangelizer of the technology today. And that mindset quickly shifted as we developed a culture and understood the potential — how it can not replace us, but optimize us in many different ways. But in terms of the journey, it started off with that.
And it's interesting as to why. Because as in-house counsel, when we were talking about AI and the potential, they said, well, it can take a lot of the low-impact, day-to-day, redundant, mundane work off your desk. And if you're transactional in nature, you say, OK, that's great, because when I'm working on transactions, I'd rather focus on that. But you know what? A lot of my work and time, especially when things slow down, is spent on these day-to-day things.
That's how I keep busy. So, if I'm no longer busy doing these things, what does that say about my job? It was just very interesting: even though we were celebrating it, there was that initial skepticism. But I'm going to talk a lot about this today in terms of how that shifted — and it started with culture, and a better understanding, and being more curious than skeptical. But I would say, interestingly, just to double down on the point, the journey started in the same way that I imagine folks at different companies, law firms, et cetera are viewing AI.
And I think it's similar throughout history. Just with any innovative technology, I think the start of the initial reaction is more fear and skepticism. That evolves as it has at Microsoft and with me to more excitement.
AL HOUNSELL: That's great. Keri, how about your journey through learning about AI and incorporating it into legal workflows?
KERI WALLACE: So, my story is a bit different. I'm at Uber Freight — which, I'm sure, is the product you're all familiar with and use every day. We're not rides or eats; we're the B2B side. We do freight shipments and logistics. And so, we're in a unique position, because we're able to run our own business and start things up, but we also leverage tools that Uber already has — and we get to use those tools for free. But then we've also been looking to see how we can use tools specific to Uber Freight.
So that's stuff like the contract management tools, the CLMs. And basically, because they're all so new, they're all giving two- to three-week trials — you can get it up to four weeks if you say, I just need a bit more time to play around with it. Everyone's looking for their flagship customers right now. And so, we've basically been trying every tool under the sun for the past six months to a year and comparing them all. And what we found is that some are better than others for our work. That's not to say that if I have an opinion on a tool, it's not going to work for your business — it's just the way we operate. We found some tools to be a lot more effective than others, specifically in the legal AI tool space.
The other thing I'd mention is that there are really three buckets of AI tools out there today. There are the legal AI tools — Harvey has got to be the most hyped one I've ever seen; they've got a great marketing team. So you've got the Harveys of the world, and you've got the contract AI tools. Those ones are all legal-focused. Then things like Copilot and Gemini — those are really more broad-based, general tools. And the way they're sold, usually you have a suite of products already, so I'm sure your company probably has a subscription to one of those or is using one of those today.
And then there are AI tools that aren't meant for the legal department — they're targeted at another department. So, your marketing team might be using another AI tool, and there's a way for us to fit into all of that. I can speak more to the marketing tool specifically and some of the work we're doing there. But again, it's not to say that you can't use Copilot. It's just: is Copilot going to be as useful for marking up a contract as it is for doing something general, like summarizing your email, your calendar, your notes? So there's a space for all of them, but they all plug in differently.
AL HOUNSELL: That's great. I find it really interesting how a lot of the AI you mentioned you're looking at is around things like contract lifecycle management, for example. One of the things that I've seen from the outside looking in — in my previous role, helping legal departments automate things, incorporate technology, and that kind of thing — is that you've got all this hype around AI. So the GC or the legal leader will come and say, my CEO says we need AI, we need to be using AI. What AI am I supposed to be using? And can I do this, this, and this?
And then you say, well, let's look at your data. Where are your contracts now? Well, they're all over the place — some in this folder, some in that. How do you manage requests that come into your team? And it's these really low-tech things — it's more around automation and workflow — that a lot of the benefits of AI actually sit on: lower-tech things like contract lifecycle management, which you've mentioned. So, we'll get into some of that discussion later as well. But maybe we'll just start with something that I'm sure everyone's really curious about. And Keri, you can kick us off on this one. What are some of the use cases right now that are actually delivering value for your legal department?
KERI WALLACE: So, as I said, we've been experimenting with different tools and different ways to use them. I'll start with just one of the tools that we've built out with our marketing team. Because, of course, as in-house legal, we're a support center. And one of the most important things, too, is that we don't want to use AI but totally shift everything to meet the AI — the AI should be meeting us where we're at. So, I spoke about one of the AI tools that our marketing team was using. And for marketing — I don't know how many people in the room do marketing review — for certain content, there's a lot of just repetitive tasks.
These are tasks where it's not strategic advice that we're giving. It's just comms, an email, something going out. And we found we were giving the same repetitive advice — what to say, what not to say, a do's and don'ts list, really. And we found we had a lot of resources that set out what legal's do's and don'ts list is. And we trained on it. But of course, not everybody reads your do's and don'ts list. I don't know how much people read in general anymore. But you have that out there; people aren't really looking at it. And then there's turnover, or, quite frankly, somebody's writing something and they forget to reference your resource.
So, for us, that was really death by 1,000 paper cuts, because it's not like these things take a long time to review. But you have to stop what you're doing, turn your mind to it, and start working on it. And so we said, OK, that's some low-hanging fruit. What can we do here? This has got all the ingredients you need for AI to come in. And so, we went to use Copilot and some of the more general tools. And what we found was that the lift for our team to build our own tool out — even with me having access to engineers that a lot of people don't — was going to take a long time.
And even then, there were still things like managing hallucinations. And so, we met the marketing team where they're at, in a tool that they were already interested in. And what we've built is essentially a spell check on top of that tool. What that meant for us is making sure that our SOPs and everything we had were in order and could be understood. It feeds into the tool. And now, as you're using it, if you're on the marketing team and you're writing, it will highlight text and then offer a suggestion. And what we were looking for, I now know, is agentic AI — it's not gen AI; there are different types of AI out there. But we created an agentic AI tool that could really help with that.
And the other thing is that that's really meant for low risk. Everyone's going to have their own categorizations, but it's what we categorize as a low-risk tool. If you're writing about something that we think is high risk, then you still have to come to us, and we still have some of those reviews. That's really on us to say, we don't think the business should be making the call on taking the risk here. And it's also important to talk to the marketing team and have them know: hey, when you're using this tool, you're taking on some risk too, by going to the tool and not always coming to us.
But it also, quite frankly, just speeds up my review. I'm sure everybody has times where you ask someone, where did you get this slide? And it's got metrics on it. And they're like, oh, it's a slide we used last year. And it's like, no, we update those things. So this will just automatically direct you and say, where are you getting these numbers from? I noticed you're quoting these metrics. Did you use the approved document with metrics? And it just saves me from having to redirect them.
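[Editor's note: a minimal sketch, in Python, of the kind of rule-based flagging Keri describes — scanning draft marketing copy against a legal do's-and-don'ts list and surfacing suggestions. The patterns, risk tags, and suggestions below are invented for illustration; the actual Uber Freight tool sits inside the marketing platform itself.]

```python
import re

# Hypothetical do's-and-don'ts rules distilled from a legal team's SOPs.
RULES = [
    {"pattern": r"\bguarantee[sd]?\b", "risk": "high",
     "suggestion": "Avoid absolute promises; consider 'designed to' or 'helps'."},
    {"pattern": r"\b(best|leading|number one)\b", "risk": "medium",
     "suggestion": "Superlatives need substantiation; cite an approved source."},
    {"pattern": r"\b\d+(\.\d+)?%", "risk": "medium",
     "suggestion": "Metrics must come from the current approved metrics deck."},
]

def review(draft: str) -> list:
    """Return every rule hit in the draft, with position and suggestion."""
    hits = []
    for rule in RULES:
        for m in re.finditer(rule["pattern"], draft, re.IGNORECASE):
            hits.append({"text": m.group(0), "offset": m.start(),
                         "risk": rule["risk"], "suggestion": rule["suggestion"]})
    return hits

if __name__ == "__main__":
    copy = "We guarantee the best rates, cutting costs by 23% on average."
    for hit in review(copy):
        print(f"[{hit['risk']}] '{hit['text']}': {hit['suggestion']}")
```

In a production version, the suggestions would be generated or refined by a model rather than hard-coded, but the shape is the same: narrow rules, low-risk content, and a human still in the loop for anything flagged high risk.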
AL HOUNSELL: That's great. I think sometimes we think about the AI tools, and we look at them as though they're going to do the full job of a lawyer, which clearly they're not based on the experimentation we've all done so far. But it's those narrow use cases where we're seeing incredible value to avoid those death-by-paper-cut situations. It's great. Jonathan, how about you?
JONATHAN LEIBTAG: So, in terms of early use cases — and I guess the challenge that I see in our legal department, and probably for folks here at the law firm and in in-house groups too — it's a matter of scale. I'm requested to be on tons of calls I can't make. Maybe I have to be on the call; maybe I don't. But it's difficult for me to prioritize, and regardless, I can't be everywhere, every time. So, I'm going to be speaking about Copilot, obviously, but it's as simple as Teams and transcriptions of meetings. I no longer necessarily have to attend every meeting to get an update as to what happened, to get a summary, to be able to review the transcript.
Now, be careful — you don't necessarily want every transcript of every meeting. It depends on the meeting. And we'll get into compliance, and into the different risk profiles of meetings and how to manage them accordingly. But regardless, I'm able to, quote unquote, attend these meetings without being there, and opine and provide guidance, regardless of whether I'm able to make the call or am working across time zones. So, that's been super helpful for me. And it's been super helpful for my clients internally as well, because they know they have their lawyer engaged.
And it's engagement without the expectation that I'm physically or virtually there at the time — and I can actually be very responsive as well. Another big facet of all of our jobs as lawyers is really the distillation of the complex into the simple and the actionable. We have all this data, all this noise — whether it be research, or summarizing an agreement for a decision maker to understand the most important points, to drive their next steps or inform their actions. And as you work with clients, everyone likes to receive information differently, in different forms.
It used to take me forever just to sit down and look at all this information and think about, OK, how can I summarize it and put it into a form that would resonate with Joe or Jane? Now, I just do stream of consciousness. We all know what we want to say; we may not know how we want to say it — that's the art of writing, that's difficult, and that's taking into account your audience. But now, I can even record it using Copilot and speak out loud — just stream of consciousness, get all my thoughts down — and then I can leverage these AI tools and say, can you summarize this in bullet form, or in a chart?
I'll go as far as to say, I know Al loves football — throw in a football analogy. Maybe it works, maybe it doesn't. But it's unbelievable in terms of the output I get, in form and in substance. And I can pump out a lot more of those types of summaries in a day and a week than I ever could before, by leveraging these tools. And a quick comment — I'll probably continue to reinforce this point — the value is to just keep iterating. How good your summary of, let's say, the stream of consciousness is really depends on your prompt. And that's an art form: how to prompt the AI, how to craft it, how to keep building on it if you get something you don't like. Once you keep iterating, you end up getting so much out of these tools. And it's transformed the way I work, for sure — just on those narrow use cases.
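[Editor's note: a minimal sketch of the iterate-on-the-prompt workflow Jonathan describes, assuming an OpenAI-style chat-completions API. The model name and feedback strings are illustrative; the point is that each round of feedback is appended to the conversation, so the model refines the same draft rather than starting over.]

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = [
    {"role": "system", "content": "You are a legal drafting assistant."},
    {"role": "user", "content": (
        "Here is my stream-of-consciousness on the deal risks: ...\n"
        "Summarize it as five bullets for a non-lawyer executive."
    )},
]

# Each pass adds feedback and re-asks, building on the prior draft.
for feedback in [
    None,  # first pass: just the initial request above
    "Too long. One line per bullet, and lead with the action item.",
    "Good. Now reformat as a two-column table: risk vs. recommended next step.",
]:
    if feedback:
        history.append({"role": "user", "content": feedback})
    resp = client.chat.completions.create(model="gpt-4o", messages=history)
    draft = resp.choices[0].message.content
    history.append({"role": "assistant", "content": draft})

print(draft)  # the final, iterated version
```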
AL HOUNSELL: That's great. What I find so interesting about what both of you shared is it really lines up to two buckets, where I'm seeing a lot of value with generative AI as well. The first is that iterative conversation that you can have with generative AI, which Jonathan, you were sharing about. And this use case is a really low-risk one, because you're constantly monitoring the outputs. You're constantly providing the instructions in order to shape it into what you want. It's not using the tool to say, draft me this and then walk away, and then the tool is going to do what an associate-level person would be able to do.
It's talking with it, getting some creative ideas, developing your structure for what it is that you're trying to get at, and using AI in that way. And then the second category is where what Keri shared, you've got these really narrow use cases where you put a lot of constraints around what the AI is actually doing in order to get it to do something very specific and very accurately. And I think those two buckets of use cases are where it's at. It's all this middle stuff where we're expecting the AI to do stuff that a lawyer would be able to do, where people are getting a little bit frustrated, I think, with the outputs.
But these two — the low-risk iterative approach, and the very specific, very constrained use of AI around a narrow universe of content to extract specific information or flag specific phrases that come up — that's where we're seeing a lot of value for both AI and generative AI.
JONATHAN LEIBTAG: Can I just add a point? It's all in terms of how you see it and frame it. And this is hopefully a point that lands with folks who are at the law firm. I always joke that I leverage AI as my first-year associate — to help me summarize a document, to help me do research, whatever it may be. But the value — and it speaks to iterating — is that, as I remember from my law firm days, you always had to be careful: if you got output that you didn't like, how do I give feedback to this young associate? With the AI, I can be so mean. Hopefully, they don't come after me.
But you can be so specific and say, no, that's not what I wanted to get out of this. That is incorrect. You drafted it this way; I wanted you to draft it that way. And again, it's that ongoing use of the tools. But if you see it as such — not just this scary technology, but as a vehicle that can act as a low-cost FTE to do first-year associate work — you're going to start using it differently. So, framing is super important. And you'd be surprised how you keep getting better and better output by providing that feedback. And you no longer need to mince words in the way that you would to the extent you actually have a young associate helping you out.
AL HOUNSELL: As a former young associate, I definitely remember there were certain partners that were really good at giving instructions that detailed exactly what you needed to do — and then, come back before you get too far, show me where you're at, kind of thing. And then others, where you walk out of the office and you don't know what it is you're doing, and you've got to ask other people and figure it out. The AI is not going to be able to respond that well in that second category. It needs the approach that you're taking.
KERI WALLACE: And it can actually be a good feedback tool for yourself. Because what you'll realize is, you take something that you think is super clear, and you feed it into the AI tool, and you realize you have not been clear. You're clear within your team; you're clear with yourself. But if somebody who is not a lawyer is reading it, you think, oh — yeah, that's not as clear as it should be. And it actually makes you go back and update your SOPs, and gives you another way to review your own work.
AL HOUNSELL: I think another thing that relates to this is, what kinds of tools are we using? There are the much-hyped tools, like the ones Keri mentioned, built on models that have been trained on legal content and legal documents in order to provide advice a little more tailored to the types of things that a lawyer might expect. And then there are general-purpose platform models that serve every possible industry. And Jonathan, maybe we'll start with you. You've got a lot of experience with one of those general-purpose models and how you can provide instruction to get legal outputs. I just wanted to see if you wanted to comment on this distinction between the legal-specific and the general-purpose models.
JONATHAN LEIBTAG: Of course. And it starts with taking an audit of your own workflows and what tools plug into each specific workflow. So, certainly, Copilot is more of a general AI tool — not legal-specific or industry-specific; it hasn't been trained with that specificity. But it's super helpful when you do that audit. And Copilot, if you're using the Microsoft suite of productivity tools, integrates so well into Outlook, into Word, into PowerPoint, into Excel, and into Teams, of course. So, it's certainly an amazing tool to help me with the more administrative portions of my day.
Not necessarily with, let's say, the legal analysis. So, if I were to chalk it up, I wouldn't ask Copilot, how should I be thinking about this — are there jurisdictional issues, whatever it may be? But I can ask Copilot, how should I be communicating this to this audience? And that's the distinction. I'll put on my lawyer hat for the legal analysis — I won't leverage AI for that as much as I would for, how do I frame the output? How do I organize the output?
AL HOUNSELL: Great. How about you, Keri? Your experience with the legal-trained models and then the general-purpose ones?
KERI WALLACE: So, with the general-purpose ones, I've used Copilot to summarize emails or just tell me where we're at in a project. You have so many projects you're working on; you can just write in, where's this project at? Is anyone waiting on me for something? It's great for that. But if you ask it to summarize a contract, you're going to get a different output than from the legal tools that are designed for that. And even within what we've been testing, some are really better than others. Some, you ask for an executive summary, and they tell you about the governing law and the arbitration clauses.
And some know that in an exec summary, you're just going for key liability points. Those tools are really using a gen AI model like Copilot underneath, and then they've built a language model that's specific to legal on top of it. So, theoretically, the way they've built it, it should be able to pick up a liability issue — and the fact that it doesn't just live under the indemnity section, it's living throughout the document — and feed that back to you. And some of those have been really helpful. Some of them, though, are better at other things — there's one tool that feeds into, or sorry, is fed by SEDAR and EDGAR.
So if you ask that tool to draft something — I asked it the other day to draft me just a very simple guarantee — it used JP Morgan and just said, I went through JP Morgan's publicly available documents; here it is. So, those are actually two different tools. Hopefully by next year they'll have come together, and we've got one tool that can do all of these things, not so specialized and segregated. But they were each better at different things. And so, I think it depends — you're not going to find one tool that's going to work for every single use case.
And so, it's about picking and choosing. Of course, we have budgets, too. So it's: what do you have the budget for, and what's free? Copilot, for us, costs my team nothing to use, so I just use it. And then it's figuring out, OK, where do we want to focus our budget? Do we need the contract review? Do we need something that can mark up an NDA? And we just make the decision from there.
AL HOUNSELL: You touched on something, Keri, that I think we've all probably noticed. And that is, there are a lot of tools out there. And even within the legal space, there are a lot of tools out there. So, we chatted about just throwing up a couple of slides here, just to give you a bit of a landscape. Over the past year, I've probably done 50 CLEs with different in-house teams, and I always end with this sort of landscape of the tools that are out there. And everybody asks me at the end for these last few slides. So anyone here can have these slides.
Is this supposed to work? There we go. Categories of AI tools. And I know it takes a few seconds for the online viewers to be able to see this as well. So, the first category is these general AI assistants, which we've heard about. And the two big ones in the market right now are CoCounsel and Harvey. These tools are focused on doing things like document review — I've got a contract, I put it in, and I can ask questions about it. They're focused on extracting data and trends from large data sets — if I've got a due diligence project, for example.
Case preparation, that iterative back-and-forth approach, summaries, timelines, a little bit of drafting, and knowledge content are what they're focused on. And again, we'll send these slides out to anyone who asks for them. These are not endorsements of the products, by the way; this is just to give you a landscape of what's out there. The next category is drafting-focused tools. A few of them: Lexis Create, which used to be called Henchman and was recently bought by Lexis; and Spellbook, which is very heavy on social media advertising, so you've probably come across it.
DraftWise, Syntheia, ClauseBuddy. And CoCounsel Drafting is one that's rolling out this year. Legal research tools. Westlaw+ AI Assist, and Lexis+ AI are two big ones from the big vendors. Every law student coming out of law school in the coming years will have experience on both of these platforms. So, as people eventually make their way into the in-house context, these are things they'll be used to using. And then Alexi and BlueJ are a couple more narrowly focused research tools focused on generating memos and focused on specific areas of law.
E-discovery is a big one, now using generative AI to extract content from large data sets. OpenText, DISCO, Reveal, and Relativity are a few of the big ones. And then in the in-house context, this is probably the most used type of tool: contract lifecycle management tools have now brought in a lot of AI features. You've got companies like Lexion, an AI-first company that was recently acquired by DocuSign; ContractPodAi, another AI-first CLM platform; Ironclad, which has brought a lot of AI into its CLM; and then some other players like Agiloft, Sirion, and Malbek.
These are the mid-level CLM tools that smaller legal departments tend to gravitate toward. You've got some much bigger ones as well — the full-blown DocuSign CLM and others. So, that's just a bit of an overview of some of the main categories of tools. We're not at questions yet, but I'll just leave that slide up. We're going to move on to the next topic now, and that is adoption. It's one thing to have all these wonderful tools in our legal departments, but there are always challenges in getting adoption of these tools in order to get full value from them.
Keri, maybe we'll start with you. What are some of the challenges and some of the solutions to those challenges that you've seen in the adoption process for AI in your department?
KERI WALLACE: So, we talked about being a prompt engineer and iterating on that. But really, if it's not working for you, it stalls. You prompt it once; it doesn't give you good output. You prompt it again; it's still not giving you good output. You're done with it. You don't really want to be sitting there and have that become your new job — training the AI to be you. So, really, one of the things that we look for is that it has to fit into what we're currently doing — a tool that somebody else is already using, where we can plug into that.
And if it's going to be used by our team, it has to pass muster. We have a team that tests it first and then spreads it out to the whole team. And if the feedback is that it's not useful, then it's just not useful. Just because it's called AI and has all this marketing behind it doesn't mean it's actually going to be useful for your team. And so, we really look out for: OK, am I prompting this too much? Am I giving this too much feedback? Is this just too big of a lift for us? That's really what we've been focused on.
AL HOUNSELL: How about you, Jonathan? What are some of the challenges to adoption?
JONATHAN LEIBTAG: I often speak to customers of ours, and they have the same question in terms of, how do we get people to adopt? And my feedback generally — and we saw it from personal experience when we tried to deploy at Microsoft — is you can't just turn on AI. You can't just turn on deployment and adoption. There are a lot of things that have to happen first. First and foremost — and you talked about this notion of data — is your data organized enough, centralized enough, in a way that the AI can actually pull insights from effectively, to help you with whatever use case you want to apply it to?
And then you have to build a culture around adoption and deployment before people will meaningfully deploy it. We talked about the natural skepticism that comes. You have to demystify it a bit. And what we found is that people may have certain expectations — the AI will help me do x, y, and z — and when they find that, let's say, it doesn't achieve that initially, they just throw it aside and say, well, this isn't as useful to me. So, a lot of it also comes down to managing expectations as to what it can do and what it can't do.
And if you want to get deployment and adoption off the ground, it's best — and we're going to talk about this — to identify the super easy use cases, just to build trust. Maybe light touch: not necessarily transforming the way you work, but helpful nonetheless. And if you play around with it and you see it's helpful in this situation, you're going to be more willing to adopt it going forward on more complex tasks. So again, just to summarize: when people ask, how do I get folks to adopt? Well, what are you doing beforehand to set them up for success in that adoption?
I think that's a real missing piece. I think it goes to the speed at which organizations are pushing people to use it. That's coming from executives: how can we plug AI into this in order to do that? But I think they're missing these preliminary steps that are super important, just to build a culture around exploration and use. And it's only optimized to the extent it's being used. So, how do you get people incentivized, and willing, and curious about its use? If you do that well, then the adoption will come.
AL HOUNSELL: I love that. Sorry. Jump in.
KERI WALLACE: Just to add to that, in terms of the tools, they are catching up very quickly. So, if you use a tool and you find you're over-prompting, or your team says, I don't want to adopt it, it's quite possible that three or six months later, whatever was bothering you has been fixed in the tool. The people who are creating AI — Copilot being one of them — are constantly iterating, constantly releasing new tools. So, if Copilot didn't work for you six months ago, I wouldn't write it off, because there's a chance that whatever was bothering you has been improved.
AL HOUNSELL: I think that's a great point. Setting people up for success, sharing the expectation that, OK, it may not be perfect — they're developing really quickly. And I think that's really important, especially when we're dealing with lawyers. Many studies have been done on the lawyer mentality, and we're not typically seen as very adept at that curiosity and trial and error. There's oftentimes an expectation of, OK, how does this work? Just show me how it works. I want to use this tool and get the value out of it.
So, as you're thinking about bringing more and more AI, which right now is in a bit of an experimental stage. As you're thinking about bringing more and more of that into the legal department and being used by lawyers, there's two types of people that I think are really important in this adoption journey. The first type are your explorers. And maybe you're an explorer. Maybe somebody on your team is an explorer, somebody who just has a very high tolerance for that trial and error and experimentation, somebody that's super curious, that will spend three times as long just trying to figure out how to do something in a new way, rather than having to do that same thing over and over again in a repetitive way.
This is the type of person who likes learning new things, who likes putting in that time to do things in new ways. You need your explorers, especially for new technologies like generative AI, where we are discovering the use cases together right now. The second category of people you need — maybe this is you; maybe this is someone on your team — are what I usually call the cartographers, which is the fancy word for map makers. These are the people that will come along on that journey with the explorers. They'll be excited about that.
They're not necessarily the ones to do that exploration, but they're the ones that can translate the use cases that are being discovered by the explorers to the rest of your team. So, as Jonathan was saying, you've got to set people up for success. You've got to fully explain this is the use case. This is how you use the tool. You got to map it out really clearly for your team in order to see some of the success in adoption.
JONATHAN LEIBTAG: And just on adoption — I don't know how this would work at a law firm, but in-house, we're incentivized to adopt and explore. And we are, I wouldn't say punished, but disincentivized not to do so. It's become such a focus and an objective to get everyone ramped up and using this technology to make them better at their work that there's an expectation that you're going to start using it. So, it's interesting — like at the end of your bio, where it said it was generated by AI, and people chuckled at that.
That's not to say anything about it, but people are almost like, did you use ChatGPT for this? And you're almost embarrassed. We have to remove that embarrassment if you're going to expect people to use it. It's a technology to help optimize you. So, some organizations, like Microsoft, have actually incentivized it: how are you using AI? Put it in your self-evaluation. You're evaluated against how you're using it productively. So, again, if you want to get adoption, of course, you have to demystify.
And it depends on the culture. But if you're seeing a lot of skepticism, or hesitancy, or resistance, there's a carrot-and-stick approach as well, which is super valuable — because ultimately, if we have to do it or are tasked to do it, we're going to do it. And then once we actually start using it, we start enjoying it and finding other use cases.
AL HOUNSELL: That's great. So, maybe on the other side of adoption, and promotion, and this desire of the organization for us to use AI, oftentimes legal teams are brought in to comment, or opine, or provide some guidance around the whole compliance issue. How do we comply with the use of AI throughout the organization and certainly within the legal department? Jonathan, do you have any insights into how should legal teams approach that compliance discussion?
JONATHAN LEIBTAG: Of course. And what's interesting about compliance, especially when you're rolling out such a powerful piece of technology organization-wide, is there's compliance — but how can you enforce compliance? The enforcement is harder than putting together the compliance policy itself. So, in terms of how we've done it: you can build your guardrails, and those are important. The value is always having a human in the loop — we hear that often, this notion of human in the loop.
Super important, from a compliance standpoint, that you're always pressure-testing, let's say, the output. I spoke earlier about transcriptions of meetings. If you're in a potential dispute or litigation, maybe you want to think about whether that's a meeting you want to transcribe — because all of a sudden, that's something that's discoverable. You're creating a whole boatload of new data, so think about how you can put legal processes in place for those types of things. So, you build your framework.
The framework is not hard to build. But then, how do you socialize it, and how do you enforce it? And I think from a compliance standpoint, that's what we're finding a bit more difficult. OK, you build your compliance policy — are you getting feedback from different stakeholders to help build that out, so you're capturing the blind spots that you may have missed, or the potential risks you're looking to mitigate against? But then how do you socialize it within your organization? How do you make sure that business leaders, or leaders at law firms, are cascading it down to the people that will follow them or listen to them?
And then at the end of that cycle — OK, we've developed it, we've socialized it, we've made it a mandatory training — well, has it worked? So, you have to do a bit of an audit of your compliance program to see if it's worked. And then, of course, new issues come up every day given how quickly this technology is evolving. You have to constantly revisit that compliance policy based on the issues of the day and based on how the tools are evolving or what tools are coming out.
So, you want to foster a culture of exploration around AI — and then, oh my God, DeepSeek came out, and everyone's thinking about it. Well, hold on a second. Let's think about where that data is being stored, compared to, let's say, the commitments that Microsoft makes, or Google makes, or ChatGPT in terms of data location and those types of things. So, all of a sudden, a new compliance matter comes up as new tools come out. You have to constantly audit your compliance program to take into account the evolution and how quickly new tools are coming out.
AL HOUNSELL: How about you, Keri? How have you been involved in that compliance discussion?
KERI WALLACE: I echo all the things Jonathan just talked about. And then, on top of that, as you're signing up for new tools, your procurement and vendor contract process is going to have to incorporate the things we already do. Everyone has security measures that have to be in place. What can you do with — or rather, what can the provider do with — the data that you put in? Make sure you're going through all those checks at onboarding.
And then the other thing that's really important is making sure that when you're informing the team, they know that when you're using the tool, you have to be logged in. You don't want to say, oh, we've got the go-ahead for ChatGPT, and somebody just goes to ChatGPT and starts putting data in — well, that's not under the contract terms that the company signed. You have to be logged into our version. So, just make sure that you're training on that, too, and making sure people are aware of when it's approved to be used.
To Jonathan's point, anything that's discoverable — you don't want to be putting that into the tools. But there's also some high-risk info that maybe should never be in a tool. So, if you're dealing with a high-risk topic, or you're in a high-risk area, then ChatGPT is not approved for that use — or Copilot, or whatever the tool you're using is. And then really, as the legal department, make sure that you have your own rules for whatever you're implementing. Because, of course, you are going to be looked at as the one setting the standard. So, you want to make sure that you and your team are also complying — that you're always logged in, and that you're really following the gold standard of the rules that you've set.
AL HOUNSELL: That's really good feedback. And I find it so interesting, the position legal departments are in: on one hand, you've got this pressure to adopt and use AI; on the other hand, you're identifying some of the risks of the use of AI for the larger organization. It's a fine line you need to walk. I want to ask you both: how do you frame that topic when you're discussing it with executives? How do you show the value of AI to your executives? How do you show the value of the legal team in offering some of this cautionary input around the use of AI? Keri, if you want to kick us off on this.
KERI WALLACE: I think for us, it's really more the legal team that needs to get comfortable with it, because everybody is hearing AI and nobody wants to be left behind. So, I would say the culture, at least at Uber Freight, is very much: bring us the tools. We want to know more about it; we want to hear these stories. So, it's less about having to show the benefit of AI, because we're very open to it. But it is the legal team saying, we do have to give this a chance, and we do have to think about it.
But a lot of that, too, is knowing who you're dealing with. And so, we do have teams who volunteer to test out the tools. And within the tool, too, you can build on it — so day one is going to look a lot different than day 7 or day 14, the more that you've put into it. So maybe, with some of the people who you know aren't going to be receptive to it, you're not going to introduce it at day one. You're going to wait until day 14 and make sure that it's a little bit further along, so that they're not on that early part of the journey — because there are a lot of hiccups along the way as you start to play around with some of these tools.
AL HOUNSELL: Jonathan, how about you? How do you have that conversation with executives around how AI has been used? How it's being used now? Some of the risks around it. How do you show value to the broader executive group?
JONATHAN LEIBTAG: Well, I think especially around AI and the introduction of technology into processes generally, executives just want to know: is it enabling you to do more with less, to the same standard as before, while mitigating against the risks that come with it? And I think you can tell that story — that's the framing of the story. So, we're able to accomplish more with less, in that we haven't necessarily had to hire, and we're continuing to output good work.
I would say even better work, just because we're able to accomplish more and further develop. As a simple example, I'm able to learn more quickly about emerging regulations, et cetera, just because I can leverage these tools to help summarize. And are we able to build the compliance guardrails and boundaries around this whole thing to make sure that the investment is worth it, and that the risks that may come don't outweigh the ROI that comes with the ability to deliver more with less?
So, if we're able to tell that story — which I think we are — and to demonstrate how we're getting better, and we're further developing people at all levels, then we can also ask the tough questions: well, what is it affecting? I've spoken to a lot of folks at law firms who have come up to me and said, well, how do you think this impacts training younger associates? Are they not doing the work that we all had to do when we started in law, because they're over-relying on the tools?
Does that impact the development of your younger associates, your younger employees? Has it impacted culture? All those types of things. So, it's just making sure that you have a very realistic view of the pros and the cons, and that you address each of those in a way that your executives are able to assess whether the investment is worth it.
AL HOUNSELL: That's great. Thanks so much for your feedback on that. We've got one more question for the group. We wanted to leave a lot of time for questions from the audience. So, hopefully, we've got a few that have come in online and then certainly in this room. If you want to think about what questions you might want to ask our experts up here. The last question, though, that I wanted to ask both of you is, what does the future look like?
We're experimenting with the current state of these tools right now and navigating our path for how that's going to affect the way that work is done in legal departments. Based on what you've seen so far, what do you think that means for the evolution of law within the legal departments and corporations? Jonathan, if you want to kick us off on this.
JONATHAN LEIBTAG: Sure. So, it's coming out now, and it's super new, and it's going to take a while to realize the promise that's being communicated. But I'm very excited about agentic AI, which you touched upon. I see a future — and not such a distant future — where I'll be on a call with one of my executives going through, let's say, an M&A transaction: a huge purchase agreement with tons of schedules. And he or she asks a question.
And I'm going to have Johnny Law, my AI agent, on the call with me. And a question will come up: well, what do we say about this? Or, where is this found in the appendices, or the attachments, or the schedules? And I'll ask Johnny Law that question, and I'll get feedback as if I had that associate on the call. I think that's going to come sooner rather than later. I think that's fascinating. We'll be able to train that agent in our image, so to speak, in terms of what we expect from the agent to support the work we do.
I think it's going to take a lot of exploration, a lot of failing before succeeding. But if we're looking out to the future, that's what makes me super excited, curious — a little daunted as well. It's almost out of a movie. But I think we're going to get there.
AL HOUNSELL: And just before we move to Keri — do you want to give a quick definition, or a working understanding, of what you mean by agentic AI?
JONATHAN LEIBTAG: I'm going to try my best. Essentially, you're going to create rules for an AI and train that AI on a specific data set that you want it to learn from. And you're going to be interfacing with it around a document, a scenario, whatever it may be. So, it's not so much question-and-answer as it is task completion, if that makes sense. It's that real-time interfacing. It's funny that we think about prompts and completions as so amazingly innovative.
I put in a question and I get an answer. I ask it to generate something, and it does. But it's almost archaic in the context of what we're talking about right now, because it's this manual input to generate the output, rather than this real-world, real-life, real-time interfacing with someone colloquially, conversationally, to derive the responses and the outcomes that you're hoping to get.
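To make that description concrete, here is a minimal sketch of what such an agent loop might look like in code. Everything in it is an illustrative assumption rather than any particular product: the llm_complete client, the JSON action format, and the toy search_schedules tool that stands in for search over a restricted document set.

```python
# Minimal sketch of an "agentic" loop: instead of one prompt in, one completion
# out, the model plans, calls tools against a restricted document set, and
# iterates until the task is done. All names here are hypothetical.
import json

# Toy stand-in for the deal's schedules (the restricted data set the agent
# is allowed to work from).
SCHEDULES = {
    "Schedule A": "List of material contracts, including the supply agreement.",
    "Schedule B": "Disclosed litigation and outstanding claims.",
}

def search_schedules(query: str) -> str:
    """Toy tool: return the names of schedules whose text mentions the query."""
    hits = [name for name, text in SCHEDULES.items() if query.lower() in text.lower()]
    return "; ".join(hits) if hits else "no matches"

TOOLS = {"search_schedules": search_schedules}

def run_agent(task: str, llm_complete, max_steps: int = 10) -> str:
    # The (hypothetical) model client is asked to reply in JSON: either a tool
    # call like {"type": "tool", "tool": ..., "input": ...} or a final answer.
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = json.loads(llm_complete(history))
        if action["type"] == "final_answer":
            return action["content"]
        result = TOOLS[action["tool"]](action["input"])  # run the tool it chose
        history.append({"role": "tool", "content": result})
    return "No final answer within the step budget; escalate to a human."
```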
AL HOUNSELL: Thanks so much for that. I think it's just important that we understand what the terms are. And we didn't plan that definition, but that was a fantastic explanation. Because I know we're doing a webinar, and I don't know who's listening. So, I'll see my email. I took a bit of a risk there. But Keri, where do you see the future of AI taking legal departments?
KERI WALLACE: I think we're so early on. I think we'll look back at this space, at the things we were complaining about and the things we were afraid of, and just have a completely different view of it. I think the hallucinations, which are a big problem today, are only going to get better. Those are going to go way down, a lot of those problems we have right now. I think the prompting is going to become a lot simpler, because the amount of prompting you have to do today is just too much effort. And the tools will learn off of those feedback loops. So, I think they're just going to become even easier to use. And they'll be more out of the box than they are today.
AL HOUNSELL: I would agree with both of you. And I think that theme of Agentic AI, or, to use another word, integration into specific tasks, is just going to get better and better. It's a huge breakthrough, the fact that I can put in a question to a large language model and it generates a string of text in my language that reads as though somebody thought about it and produced a thoughtful response. That's great for me.
But the real benefit of some of these models, as they integrate into those tasks and into those workflows, is that the large language models can also generate machine-readable instructions. And that's going to allow a lot of deeper, task-focused workflows, where the machine is responding to itself or to another LLM in order to do a follow-up task and thinking through the levels of how to perform that task. I think that's going to have a really big impact on the way that law is done.
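As a rough illustration of that machine-readable point, the sketch below asks a model for structured JSON instead of prose, validates it, and hands it to downstream code. The llm_complete client, the prompt, and the task schema are all hypothetical assumptions, not any vendor's actual interface.

```python
# Sketch: prompt the model for machine-readable output (JSON matching a small
# schema) rather than prose, so downstream code, or another LLM call, can act
# on it. The client and the schema are illustrative assumptions.
import json

PROMPT = """Extract the follow-up tasks from this note and reply ONLY with JSON:
{"tasks": [{"action": "...", "owner": "...", "due": "YYYY-MM-DD or null"}]}

Note:
%s"""

def extract_tasks(note: str, llm_complete) -> list[dict]:
    raw = llm_complete(PROMPT % note)
    data = json.loads(raw)  # fails loudly if the model drifted from the schema
    for task in data["tasks"]:
        if not {"action", "owner", "due"} <= task.keys():
            raise ValueError(f"Malformed task: {task}")
    return data["tasks"]  # each task can now be dispatched programmatically
```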
And some of the use cases we're discovering right now, which I think is really fun: as we see what these models are good at through this manual process of question and answer, the vendors who are providing these tools are seeing the types of tasks we're focused on, and are then able to tune the software to focus more narrowly on the output we're actually wanting from these tools. So, they're going to get a lot better. They're going to be a lot more useful.
And now is really the time to start experimenting and getting a feel for what it is that the models can do through this supervised workflow. Great. So, why don't we open it up to questions both-- I've got to switch iPads here. Questions from our online group. And I just hit the screen. Sorry about this. And lost the tab. Here it is. So, questions from this group. Please feel free to just put up your hand. We have people with microphones that can run out and listen to your question. And we'll also take some from the online chat as well. We've got one here.
AUDIENCE: I'll pick up on hallucinations. The thing that freaks me out the most is the difference, to me, between generative AI and a first-year associate: a first-year associate is not going to make up citations out of whole cloth, whereas generative AI thinks that's the way you answer the question, because you make up something that sounds like a real legal document. So, when you're proofing the things that come out of the AI tools that you use, how do you guard against that sort of thing?
KERI WALLACE: I can take that, just from a use case I was trying to build, which was a chatbot that could answer the very basic questions that every sales team comes up with. So, what's our legal name? What's our address? All these things that are just very simple. And as I was building the tool, I asked the tool, who's working on this? And it said, like, David. And there's no David on our team. But I guarantee you, if it said that to sales, they'd be like, David, what's going on with my contract?
So, the decision there was just that it's not ready for prime time. We're just not going to send that out to the sales team, knowing that they're not going to check the hallucinations the way we do. But I also know that the legal team does check hallucinations. So, maybe the better role for the chatbot isn't to be sales-facing, but for us to have a chatbot where we can copy and paste the sales question into the tool, and then we don't have to look up something like our registration number. It will just spit that out.
And if it says 1, 2, 3, 4, we know enough to think, that sounds incorrect. So, some of it is, as I was saying, I actually have a call Thursday with the Copilot team; a team member reached out to say, I think I can help you solve this problem. But at least for the immediate term, we just said we can't use it that way. It's just too risky. But let's give it to a team who actually will check these things.
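One hedged sketch of how a team might guard against exactly this failure mode: answer the routine facts (legal name, address, registration number) from a verified lookup table maintained by the legal team, and refuse anything outside it rather than letting the model generate a value. The table contents and the matching logic here are invented for illustration.

```python
# Sketch of a hallucination guardrail for routine facts: answer from a table
# the legal team maintains, never from model generation. Contents are invented.
VERIFIED_FACTS = {
    "legal name": "Example Corp ULC",
    "registered address": "100 Example St., Toronto, ON",
    "registration number": "000000",
}

def answer(question: str) -> str:
    q = question.lower()
    for key, value in VERIFIED_FACTS.items():
        if key in q:
            return f"{key}: {value} (source: legal fact table)"
    # Anything outside the table is exactly where hallucination risk lives,
    # so refuse instead of guessing.
    return "Not in the verified fact table; route this one to the legal team."
```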
AL HOUNSELL: Great. We have a question online asking, is it not the case that law firms generally prohibit, through their AI policies, the use of these tools in the manner that the panelists are describing? For example, drafting reports and emails, providing summaries, all of which would involve putting client-specific information into the tools. So, I guess I can answer that one. And then there's a closely related question on the more general side, which is, how do we protect data that's confidential, private, or classified when using AI in our work? If you want to think about that one.
So, I'll start from the perspective of law firms. As you're working with your outside counsel, you really determine what the guidelines are for the tools that your outside counsel will use. In many outside counsel guidelines, for example, we're seeing more and more clients asking for the use of these tools in order to bring down costs, improve efficiencies, and that kind of thing. We're seeing that a lot more. There are rare exceptions where clients will say things like, we don't want you to use generative AI in any of your work.
We're seeing that less and less. But in terms of what large law firms, or any law firms, are using, most law firms are deploying generative AI and AI tools for the work that they're doing. The real question is around data security. You want to be absolutely sure that the data is secure, that the data is controlled, that the data is resident where you want it to be resident. And so, a lot of the negotiation that we have to do on the law firm side is to ensure that the vendors are upholding the very high standards of security that law firms need upheld for the data that goes into these tools.
So, that's what we're seeing across law firms now. Obviously, the in-house teams determine the way in which law firms can perform the work that they're performing for them. But the data security topic is something that's very, very important to every law firm that you might work with.
JONATHAN LEIBTAG: I think that's right. I think you have to vet every vendor according to their terms and conditions and the commitments they make in terms of adhering to international standards like ISO and SOC, what they will or won't do with your data, where it's going to be stored, how it's processed, et cetera. But a lot of these tools also have configurable controls within them. So, for example, with Copilot, you can configure, let's say, a chat so it's only accessible by a few people. And you can apply sensitivity labels.
You can add a data retention policy to it. So, there are all these toggles that you can turn on and off that you definitely have to explore as well. And this is all part of that compliance policy that you build and train your folks up on. But the most important thing to do is to vet each vendor according to the risk tolerance of your firm, of course. And historically, it's always an interesting conversation as these things come up. To the person who wrote the question in the chat: I've been at Microsoft for 10 years now.
And all these concerns were the same concerns around the migration to the cloud. Are you crazy? We're going to put the firm's confidential information on servers that aren't on-prem, in our building, that we have full control over? And it was the same thing. And it comes up with every innovative new technology. It starts with fear and resistance. But then you dive into the controls, and you vet the vendors accordingly. Listen, whether it's Copilot, or Harvey, or someone else, they're investing a lot of money in getting clients onboarded to use their products and services.
I can tell you the biggest customers, let's say in Canada, would be governments and banks. They really care about their data. The future of Copilot in Canada is dependent on making sure our governments trust it and our banks trust it. And the best customers to go after early are the law firms. Because if the law firms trust it, then of course, the banks are going to trust it. But there are definitely things to look for, and a compliance policy to create around these types of things. And the best place to start is with the vendor terms and whether you trust the vendor.
AL HOUNSELL: I think an important point, inasmuch as you're involved in these sorts of compliance discussions, is that you really want to frame the discussion in the broader themes that you see throughout these innovative technologies. At one point, email was a super controversial topic. If you think about the security of email and all the potential difficulties with that, if I were to list those out to you, you'd think, how is it that we're communicating by email all the time? What? We're sending legal documents over email?
And this was a huge topic of discussion at one point. But eventually, the benefits of this technology just became so overwhelming. Can you imagine having to lick an envelope and put a stamp on it in order to communicate? The benefits are so big that the world just changed. And people stopped looking at it the way they looked at it before. The same thing happened with the cloud. In a few years' time, AI is going to be everywhere. It's going to be on all of our phones. It's going to be connected to every application that we use on the computer. The use of AI in all these different areas of the business landscape is going to be inescapable.
We have another question. And I'd love to hear actually, the in-house take on this. And then I can provide the law firm thoughts. And that is, can we discuss the impact of AI on billing models? Presumably, AI tools might make lawyers more efficient at what they do. Will this make hourly billing become obsolete? I'd love to hear what you guys think from the in-house team. How are you approaching your outside counsel as you think about the use of AI in legal work?
JONATHAN LEIBTAG: It's interesting. I think we'll see. People have been talking for a long time about the billable hour and whether it's going to go away. And it's still here. I think models are changing. You hear a lot in the selling world about outcome-based selling. So, is billing going to be outcome-based rather than hours-based? I don't know what metrics there are to arrive at what outcome-based means. But it's an interesting question that I'm frankly not having with our outside law firms. I don't manage that.
I imagine these were the same conversations when LexisNexis came out and all these types of things, when you didn't have to go to the library and spend all day there anymore, and you could do some research online. How is that going to impact billing? Can you pass those subscription costs on to the customer or to the client? So, I imagine similar considerations will arise around AI.
KERI WALLACE: I haven't had those conversations. I don't think we've taken a stance, at least from my team, on that. I do agree, though, the billable hour has been under pressure for so many different reasons, from so many different tools and teams. When I go to external counsel, I would expect that they're using AI the same way I am, which is just as a Copilot. If I'm going to external counsel, that means it's such a nuanced, difficult question that I can't answer it myself. And so, I'm not expecting that you plugged it into ChatGPT and got an answer.
I imagine that you're really using that tool as a Copilot. So, does it make you more efficient? I can't tell you that because AI is out there, instead of taking an hour, it should take 30 minutes. I don't know the answer to that. But I think teams have been sensitive to their billings for a while. And so, it's really on the law firm to see if it meets their strategy to bring down costs, or to have a flat rate, or to have some other project-based billing. But I don't think that's specific to AI. I think that pressure has been there for a while.
AL HOUNSELL: We've definitely been hearing about that for a while. I was at a dinner a few months ago with a number of in-house lawyers. And this topic of conversation came up. And what I found really interesting is that they said many of the law firms that they work with always offer a couple of different ways of billing. One is the billable hour, and the other one might be outcome-based, or a fixed fee of some kind. And almost invariably, the in-house teams that were represented there were saying they go with the billable hour.
They like the fact that they have the choice and that there's that thought that's gone into it. But they've often been gravitating toward the billable hour. What I think is interesting on the law firm side, though, is the capability of AI to begin to understand our data better. Because in other industries, where it's typically fixed-fee or outcome-based pricing, a lot of data goes into making informed decisions around what the pricing should actually be.
It's not finger to the wind and making a guess and hoping it works out. You've got a lot of data that supports that. And the capability of AI to do things like sift through massive amounts of data, looking at what people are actually doing in the creation of legal work product in order to build much better estimates, is something that I find really exciting. And maybe we'll see some movement there. We've got these competing considerations on that front. Any other questions from this group? We have gone through our online chat questions. Any other questions in here? There's one over there.
AUDIENCE: Give me an example of a marketing situation where it becomes ripe for using AI. You mentioned, I think, the ingredients that you need for AI to be helpful. You've gone through some of the easier examples. But what are the ingredients of a task where you say, you know what, this is for AI?
KERI WALLACE: I would say the key thing there was that it's repetitive. It wasn't strategic legal advice; it's a "do not use." So, we're comfortable giving that. And it's the same every time. We also have a "use with caution," kind of a yellow. So, sometimes you can use it, but look at the playbook. And it's very simple to just redirect, just saying, here's a link to the SOP we're referring to. It's already been there; you've already been trained on it. So, that's one thing.
The other thing is it's not strategic. It's very easy to say, if they write this, then do this. So, it's very much if x, then y, as opposed to an M&A deal, where you're not going to ask it to review a purchase agreement, because it's just too nuanced. There's too much going on in there. And the other thing is that both teams can use the tool and check it, and there's a reliability there. So, in terms of that particular use case, it's really a flag.
So that when it comes to my team, instead of going through two or three rounds of comments, maybe it just goes through one round. Because that first round of, are you checking the right metrics, are you using the right phrases, is already done. So, it just makes the review faster. It's one round instead of two or three.
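For a sense of how simple that "if x, then y" first pass can be, here is a toy version: deterministic rules that flag known problem phrases in a draft before it ever reaches a lawyer. The rule list is invented for illustration and is not anyone's actual playbook.

```python
# Toy "if x, then y" first pass: deterministic rules flag known problems in a
# marketing draft before human review. The rules are invented for illustration.
import re

RULES = [
    (re.compile(r"\bguarantee[ds]?\b", re.I), "Do not promise guarantees; see the playbook."),
    (re.compile(r"\bbest[- ]in[- ]class\b", re.I), "Unsubstantiated superlative; use approved phrasing."),
    (re.compile(r"\bfree\b", re.I), "Pricing claims need legal sign-off."),
]

def first_pass_review(draft: str) -> list[str]:
    flags = [message for pattern, message in RULES if pattern.search(draft)]
    return flags or ["No rule-based flags; proceed to a (shorter) human review."]

# Example: first_pass_review("Our free, best-in-class tool...") returns two flags.
```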
AL HOUNSELL: Jonathan, any thoughts on that? What makes a really good use case for AI?
JONATHAN LEIBTAG: I think redundant and repeatable, where the question-and-answer is pretty black and white. So, for example, we get pinged all the time on policy guidance. Well, we have our policies, and there are so many of them. And people are, quite frankly, too lazy to read them. So, they reach out to the lawyer asking, can I do this? Can I do that? So, we just created a self-help tool that we feed our policies. And we don't vet it. And that goes to training and culture.
And they're still accountable for the decisions they make. But if you have a question around policy decisions, you're discouraged from reaching out to the lawyers; go to the self-help tool, AI-powered, fed by our policies. And it saves us a bunch of time. And that's pretty low risk. Because again, the policies are pretty clear on their face.
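A minimal sketch of a policy self-help bot along those lines might look like the following: retrieve the most relevant policy passages, then instruct the model to answer only from them and to punt to legal otherwise. The keyword-overlap retrieval and llm_complete client are simplifying assumptions; this is not how the actual tool described here is built.

```python
# Minimal policy self-help sketch: naive keyword-overlap retrieval over a
# policy dictionary, then an instruction to answer only from those excerpts.
# The retrieval method and llm_complete client are simplifying assumptions.
def retrieve(question: str, policies: dict[str, str], k: int = 2) -> list[tuple[str, str]]:
    words = set(question.lower().split())
    ranked = sorted(
        policies.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def policy_answer(question: str, policies: dict[str, str], llm_complete) -> str:
    excerpts = "\n\n".join(f"[{name}]\n{text}" for name, text in retrieve(question, policies))
    prompt = (
        "Answer using ONLY the policy excerpts below, and cite the policy name. "
        "If the excerpts do not cover the question, say so and refer the user "
        "to the legal team.\n\n"
        f"{excerpts}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)
```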
KERI WALLACE: I would say a good example of that for in-house is gifts and entertainment. Christmas time, I get a lot of those. That is ripe for an AI tool. That is a yes-or-no question: what's the value? The policy says this.
AL HOUNSELL: One thing that I normally share is that the best use cases are where you've got three things. One is where you're looking at a narrow universe of content. So, for example, a policy or group of policies, or a particular contract. A narrow universe of content where you're asking black-and-white questions, like what Jonathan just mentioned. So, I want relatively simple or straightforward information from this narrow universe of content. And I need to do it a whole lot of times. It's highly repetitive. I can scale it across multiple instances.
So, in the law firm context, the gold standard here is something like due diligence. I'm looking at one contract at a time. I want these specific things. And then I need to do it for another 5,000 contracts, that kind of thing. Where you've got those three things together, those are the best use cases for what generative AI is able to do right now.
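That due-diligence pattern, the same narrow questions asked of every contract in a large set, is easy to picture in code. The sketch below is a hypothetical illustration: the question list, the llm_complete client, and the CSV output are assumptions, and a human still verifies every answer.

```python
# Sketch of the due-diligence pattern: a fixed, narrow question set applied to
# every contract in a large collection, with answers collected for human
# verification. The questions and llm_complete client are assumptions.
import csv

QUESTIONS = [
    "What is the governing law?",
    "Is there a change-of-control clause? Quote it if so.",
    "What is the termination notice period?",
]

def review_contracts(contracts: dict[str, str], llm_complete, out_path: str = "diligence.csv"):
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["contract"] + QUESTIONS)
        for name, text in contracts.items():  # scales the same to 5 or 5,000 files
            answers = [llm_complete(f"{q}\n\nContract:\n{text}") for q in QUESTIONS]
            writer.writerow([name] + answers)  # a human still verifies before relying on it
```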
And there's been talk about AI lawyers. This question just came in a second ago: based on your testing of current AI tools, how far are we from having an AI lawyer? Instead of being merely a tool, when will AI replace lawyers, if ever? This is an interesting one to maybe end on, actually. So, maybe we'll hear from each of us. Are the robots going to take over? Who wants to start?
KERI WALLACE: I can go. I think, first off, there are a lot of Law Society rules on this. I think we end up so caught up in, like, Terminator. But there are actual rules when we practice law. And there are things you have a responsibility to do, and citations you have a responsibility to look into. So, if you find a tool is hallucinating, it's your duty to make sure that hallucination isn't happening. So, on whether there will be an AI lawyer, I think the Law Society has to have input on that.
But in terms of how far away the tools are, you still need that human intervention. It's not answering a customer service call. There's a lot of nuance to it. So, maybe do we get to a world where you're comfortable putting your NDAs through the AI tools, and everyone has an AI tool, and we just have a standard? Maybe we get to a point where you're comfortable doing that. But can it be the GC of a company? No. Just frankly, I don't see that ever happening.
AL HOUNSELL: Jonathan, are the robots taking over?
JONATHAN LEIBTAG: So, I'm going to say never. And I'll frame it this way: who do you want to be to your client? At the end of the day, it's a buzzword, but we want to be that trusted advisor. I want to be the individual my executives go to for advice on material decisions. And material decisions aren't black and white. Material decisions require judgment, intuition, and experience.
And there are a lot of factors that inform whether to go left or right. And I think the AI will only get better at helping us identify those factors. But ultimately, it will be a judgment call based on all sorts of factors, internal, external, personal, even emotional, market conditions, all sorts of things that I hope executives look to their trusted advisor, their lawyer for advice.
And regardless of how powerful those tools get, I think someone still wants to pick up the phone, or meet face-to-face with someone, and ask, what direction should we go in, and why? And again, I think judgment is our premium offering as lawyers. And as long as we keep our focus on that being our premium offering, we'll only get better at it, because we can spend more time building rapport with our executives when we don't have to do the mundane work of advising on policy decisions.
We can leverage AI to build up ammunition and a repertoire of knowledge. But then we'll take all of that to further buttress ourselves as that trusted advisor. So, if anything, I think AI will make the lawyer more valuable, if we think about it the right way.
AL HOUNSELL: I would agree with those sentiments. And something I say a lot at this juncture is that we have to make a distinction between tasks and jobs. Absolutely, AI is going to be doing certain tasks that lawyers are doing right now. But that allows lawyers to move up the value chain: to handle more complex things, to handle greater volumes of legal work, to provide that judgment, as has been said, in an increasingly complex world that has a lot more data, a lot more complexity, and a lot of new, interesting legal problems that arise from this AI-infused world.
And so, we're constantly going to be needed for that kind of thing. It's just that the specific tasks we're doing are likely going to change in the future, as AI is able to do some of those tasks that right now require that human element. So, I'd like to thank our panelists for sharing with us. Thanks so much, Jonathan and Keri. And I'd like to thank everybody who's attended here in person and online. Please feel free to send any emails our way. If you're interested in any of the slides, we'd be happy to send those across to you as well. Thanks, everyone!
But how can legal departments effectively leverage AI? What are the real-world use cases driving impact today? And how can legal teams balance innovation with risk management and compliance?
Join us for an insightful panel discussion in this on-demand webinar featuring legal and AI experts as we explore:
Practical AI applications for legal departments, from contract analysis to legal research and risk management
How AI can streamline workflows, enhance decision-making, and improve client service
Key challenges, including data privacy, security, and ethical considerations
Best practices for adopting AI tools while ensuring compliance and mitigating risks
This session provides actionable insights for legal professionals looking to navigate the evolving AI landscape. Whether you are just starting to explore AI or are already integrating AI into your workflows, this discussion will equip you with the knowledge needed to stay ahead.
Law Society of Ontario: This program is eligible for up to 1.5 Professionalism Hours.
Law Society of British Columbia: This program is eligible for up to 1.5 hours of Practice Management content.
Barreau du Québec: This program is eligible for up to 1.5 hours of continuing professional development.
Law Society of Alberta: This program qualifies as a learning activity under Domain 5: Practice Management (Competency 5.6).
This organization has been approved as an Accredited Provider of Professionalism Content by the Law Society of Ontario.
NOT LEGAL ADVICE. Information made available on this website in any form is for information purposes only. It is not, and should not be taken as, legal advice. You should not rely on, or take or fail to take any action based upon this information. Never disregard professional legal advice or delay in seeking legal advice because of something you have read on this website. Gowling WLG professionals will be pleased to discuss resolutions to specific legal concerns you may have.