ZAFFAR JAFFER: So today's session is part three of our AI on the horizon event series titled Commercialization of Artificial Intelligence. I want to welcome everyone here in the room and the over 300 people joining us virtually.
We hope that this will be an informative and engaging discussion. The format for today is we're going to have our panelists provide brief presentations. And then we'll have a panel discussion that I will moderate.
And we encourage your participation. So please send in your questions, and we'll try to get to as many as we can in our time together this afternoon.
So to my left right here is a partner from our IP group, Tim Bailey. To his left is VP of Growth for North America at Blackbook.ai, Robbie Butchart. And to his left is the Global Head of AI Solutions at Dell Technologies, Rob Parrish.
And my name is Zaffar Jaffer, and I'm the Head of the Corporate Group and a partner here at Gowling as well. So with that, I will turn it over to Rob Parrish.
ROB PARRISH: Good morning. As I said, I'm Rob Parrish, and I lead our global team for AI Solutions at Dell Technologies. People often ask me what that means. To most people, Dell provides products like laptops and infrastructure and everything else. And people say, Dell for AI? How does that make sense?
So our team is focused on working with our partners-- folks like Blackbook, as well as a lot of other software technology companies and services companies-- to build AI solutions.
A lot of customers have been struggling with the concept of build versus buy. So yes, Dell has a lot of market-leading AI infrastructure products, but they're only as good as the software and the services that deliver AI solutions on top of them. That's where we're working with our partners to deliver AI solutions to market.
So what we're going to talk about today-- we've got the clicker working. Maybe we'll be going to mouse. Here we go.
So just to cover off where Dell sees the challenges in the market-- as we look at the hype cycle of gen AI and where people are today, consumer-grade AI, I think, is pretty prolific in the market now, in terms of ChatGPT, solutions from Google with Gemini, Claude from Anthropic, and you can name all of them. Consumer-grade AI is fun for people to use. But when you get to corporate and commercial use, how do we enable AI that's actually going to deliver ROI for the business?
With a lot of our enterprise customers that are looking at AI, it was typically the FOMO effect, Fear of Missing Out: we need to make sure we're up to date with this technology so we can figure out how we're going to use it.
So now they're coming through this trough of despair, as we call it, to say: we need to stop just throwing capital and budget at this and start figuring out which prioritized AI use cases are actually going to either make us money, save us money, or avoid risk. Typically, it comes down to one of those three. So how do we prioritize those?
The second challenge is around the expertise. So when we've gone from traditional AI to generative AI, and now we're in the world of agentic AI, those architectures are getting more and more complex.
And for organizations that don't have the skill sets, what does that mean for us? How do we pick and choose the technologies that are going to enable us to deliver these AI capabilities to the business?
And then with that comes the cost. As we look at cloud-based services versus procuring and developing on-premise, the balance for most of our customers is now shifting: doing this in the cloud is becoming expensive. We're putting all of our data there-- that becomes expensive. We're trying to run all these different AI use cases, which turn into what you'll hear called "tokens."
In the cloud, the cost is associated with how big a response the AI is providing. The size of that response is broken down into a token cost, and that token cost is starting to become too difficult for customers to justify.
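To make the token math concrete, here is a minimal back-of-the-envelope sketch of how a cloud AI bill scales. The per-token prices and usage volumes below are illustrative assumptions, not any provider's actual rates.

```python
# Back-of-the-envelope cloud AI cost estimate.
# Prices and volumes are illustrative assumptions, not real provider rates.

PRICE_PER_1K_INPUT_TOKENS = 0.01   # USD, assumed
PRICE_PER_1K_OUTPUT_TOKENS = 0.03  # USD, assumed; output usually costs more

def monthly_cost(requests_per_day: int,
                 avg_input_tokens: int,
                 avg_output_tokens: int,
                 days: int = 30) -> float:
    """Estimate monthly spend for one AI use case."""
    per_request = (avg_input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS + \
                  (avg_output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS
    return per_request * requests_per_day * days

# Example: a support assistant answering 10,000 questions a day,
# with 1,500 prompt tokens and 500 response tokens per question:
print(f"${monthly_cost(10_000, 1_500, 500):,.0f} per month")  # $9,000 per month
```

Multiply that across dozens of use cases, and the on-premise question that follows comes up naturally.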
Are we going to continue to do AI development, as well as running production applications, in the cloud? So customers are starting to look at, how do I bring that on-premise?
And that then brings the challenge of, well, do I have the skill sets in-house to go and run it on-premise? Or do I need a service delivery partner to come and help do it? Or do we go and hire people to do that?
And the challenge, I think, to a lot of organizations is, they may have some AI developers or data scientists, but running that at scale requires a different operating model. And how are we going to maintain these systems?
So again, that's why Dell is now moving toward platforms. Rather than just offering a bunch of different tools that people have to go and plug together and integrate themselves, how do we offer an AI platform on which organizations can quickly develop more and more AI use cases, rather than looking at different tools for every single use case?
So what's the AI strategy within Dell? We have this concept of AI IN, FOR, ON, and WITH. So we're developing AI capabilities IN our products.
Everything from AI-powered PCs, where we're building a lot of AI technologies into the laptops and workstations, to our data-center-based products, as well as edge-based products.
So we have products that run in ruggedized environments-- from deep-well rigs to outdoor sites in hot and cold temperatures-- where customers want to publish applications out to thousands of locations without having to do a truck roll and take engineers out there to deploy them. We ship them straight there. They plug them in. And then they can do remote deployment and management.
Then there's AI FOR Dell, which is how we're doing AI internally for ourselves. We went from something like 900 AI use cases, which was just untenable, down to now delivering 12, because we went through the exercise of use case prioritization to say: we're not doing anything now that doesn't generate revenue, reduce bottom-line cost, or avoid risk.
AI ON is how we're developing AI capabilities on top of our products. They're more software-based. And then AI WITH is with our partners, like Blackbook and others, where we're developing AI platforms and solutions with our partners.
So in terms of the adoption approach we're typically seeing, it's less about what all the technology solutions are, because the choice is getting a little exhausting. You can choose solutions in the cloud; there are a lot of software providers and services providers; there's folks like Dell.
The priority is looking at, what are the use cases that you want to prioritize for your organization? How do you get to a structured process for evaluating which AI capabilities are either going to improve a business process we can highly automate, or going to make us money, so we can deliver an AI capability as a service to our customers?
So prioritize those use cases based on business return, and then look at the technical feasibility of delivering them. One may deliver a great return or a cost saving, but if it's going to be really complex to deliver over multiple years, is it still a feasible use case?
And then the final piece is data. If you don't have the right data, there's no point in looking at any of the complex AI capabilities. Without the right data, the accuracy of AI is not going to deliver the return you're looking for. Otherwise, you might as well just be using consumer-grade AI in the cloud.
So I think it's those three: understanding what the business needs and prioritizing it; making sure it's technically feasible-- do we build or do we buy?-- and making sure we have the right data, and data quality, to deliver against that use case.
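Those three criteria can be turned into a simple scoring exercise. A minimal sketch, with weights and scores invented purely for illustration:

```python
# Minimal use-case prioritization sketch. Weights and scores are invented
# for illustration; a real exercise would calibrate them with the business.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    business_return: int  # 1-5: make money, save money, or avoid risk
    feasibility: int      # 1-5: can we deliver it in a reasonable time?
    data_readiness: int   # 1-5: do we have the right data, at the right quality?

def score(u: UseCase) -> float:
    # Weighted sum across the three criteria discussed above.
    return 0.4 * u.business_return + 0.3 * u.feasibility + 0.3 * u.data_readiness

candidates = [
    UseCase("Invoice-matching automation", business_return=4, feasibility=5, data_readiness=4),
    UseCase("Branch digital human", business_return=2, feasibility=3, data_readiness=3),
    UseCase("Drug-discovery model", business_return=5, feasibility=1, data_readiness=2),
]

# Rank the candidates from strongest to weakest overall score.
for u in sorted(candidates, key=score, reverse=True):
    print(f"{score(u):.1f}  {u.name}")
```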
Then we can start to look at what expertise we need to go and deliver against these prioritized use cases. Do we need a partner like Blackbook? Do we have the skills in-house? Do we need a bit of both?
Once we understand those, you can start to build the workflow of how we're going to deliver a project plan to deliver against this AI capability and what the resulting KPIs are going to be. Then you can look at what is the right technology to deliver against that workflow.
So often people jump to the technology first and say, we're going to deploy a bunch of AI infrastructure and some AI tools-- now go and build AI. It becomes a bit of a Field of Dreams: if you build it, they will come. Well, they won't, because nobody wants to use the platform in that way. You've got to look at the AI use case from the top down.
So Dell did exactly the same thing. We had everybody looking at every single tool going. And fortunately, we had some infrastructure we could probably use because we kind of made it. But what happened was we had these 900 use cases, and none of them were actually going into production. It was all pet projects.
So the whole thing changed to a top-down approach, all the way from Michael down, including introducing a chief AI officer who would actually put some structure, operating model, and control over which AI use cases we were going to do for Dell, as well as how we could then offer them to customers. That was the big shift for Dell, probably six years ago.
Once we've looked at those, we can identify the right technology. Do we go and buy? Do we go and build? And then we can make sure that technology, one, is going to deliver a return-- but then, how are we going to maintain that platform?
Because delivering it into production the first time is one thing; you've then got to maintain it. The models are going to keep upgrading. The software they're running on is going to keep upgrading.
The infrastructure is going to reach end of life. Are you going to refresh it? You need a lifecycle model to continue to develop those applications, certainly while they're still delivering business value. And then the final piece is, how are you going to measure the KPIs?
So if we're looking at an AI use case that over three years is going to deliver this return on investment, how are you going to measure that? If that starts becoming five years, well, is this really giving a return against the original strategy for delivering these AI use cases?
So that's the approach we now take within Dell, across our professional services, our services partners, and our software partners, as well as how we're developing AI capabilities on our infrastructure. We look at use cases, expertise, and deployment.
AUDIENCE: It has to arrive first to disrupt, right? I just wanted to ask a question in terms of AI technologies. I've been seeing this in the workplace due to different-- [INAUDIBLE] in the real sense, they just have some data that is somewhat structured. They use API calls just to customize whatever model you're signed up for. That's the simplest way, just buying credits until they make--
Then I'm seeing very costly, proprietary, structured data, and everything else. But they're still using some third-party technology to create whatever they want to train on that data. And suddenly, it's really useful and something that's possible to commercialize.
But they're dependent on the capabilities and the evolution of the third-party technology. And the third adjacency is a custom-built model, like their own models [INAUDIBLE].
Using some sort of machine-learning approach to train their own models. But most of the time, they're going to fail because they can't afford to. And even if they can afford them, [INAUDIBLE] how to manage the lifecycle [INAUDIBLE].
So what you're talking about [INAUDIBLE] builds the AI platform [INAUDIBLE]. And I guess the context is, what exactly do you mean by that? Do you mean API calls or platform? Do you mean data platform? Do you mean the data--
ROB PARRISH: So I think everything you're talking about is more the technology workflow to delivering an AI application. I'm talking more at the business level of why are we even trying to use AI.
AUDIENCE: All of that.
ROB PARRISH: Because then when you get into why we're trying to do AI-- so I'll give you an example. We typically put use cases into four categories. The first is AI assistants. So there are people doing coding assistants, where we're not using developers anymore-- we can actually use AI to do the coding.
We just give it prompts in human language-- in English or whatever language you want to speak. And it will convert it to code. So there's coding assistants, there's digital assistants, which I think is typically what everybody uses.
If you go into ChatGPT, it's basically like an advanced AI search engine versus using Google. A lot of people now will use AI rather than Google. In fact, if you go into Google now, most of the answers will come back with an AI answer first.
So there's digital assistants. And digital human is another one. So you're actually creating an avatar, so it feels like you're talking to a digital human.
Then there's intelligent automation, which I think is where a large proportion of the use cases for AI sit: how do we automate business processes? Rather than gen AI, this is more predictive AI. We're doing a very prescriptive task using an AI machine-learning model to go and do that task.
And then we have digital twin, which we've seen a lot in folks like manufacturing, even in oil and gas, where we want to create a digital twin, either we have a product twin, composite twin, or process twin.
So product twin, like we do with McLaren Racing, for example, they've created a product twin of their race car. And now they can do predictions and say, what's it going to look like on this track? If the wind changes direction halfway through the race from this direction, what does it do to the airflow over the car?
They can now do that digitally with a digital twin. Then they can put it in the wind tunnel, where, from a regulatory point of view, they only have a certain number of hours to use it. They can do more of that digitally now, and they're now the championship winners.
AUDIENCE: [INAUDIBLE] between prediction and reality-- the wind tunnel is reality, right? So there are--
ROB PARRISH: So all of the digital twin is based on accurate data from the wind tunnels. The digital twin has to be accurate. It's a bit like when people use it for a factory-- it has to be the exact dimensions of the factory.
So BMW, for example, where they've done a digital twin of their factory, they're using gen AI to say, what's the most optimal way to move the parts around the factory?
And AI is then doing simulations to say, this is the most optimal way to move using automated robots in the factory to bring it to the builders in the most ergonomic way, i.e., they turn around, it's right at their level, and it moves it around. So they're using AI to do all those predictions.
AUDIENCE: Is there an observation [INAUDIBLE] in the type of definition of [INAUDIBLE] that are acceptable, or is all that skipped?
ROB PARRISH: It depends. Sometimes, yes. Sometimes it has to be absolutely pinpoint-accurate. So for BMW, for example, if that's a meter off, it could bump into something else in the factory. So they have to be within an inch accurate for the robots to be moving around the factory and how it can predict it.
Otherwise, it could say yes, do this. They go and implement it, and everything starts banging into each other. So there has to be a level of accuracy that goes into it.
And then the final one is physical AI-- robotic automation. How are people using robots, and using AI to manage how the robots are actually going to deliver something, everything from factories to autonomous vehicles to, you name it.
So typically, all use cases fall into one of those categories. But delivering them-- and why should we do those? Why do you want to have a digital assistant?
We have customers. They want to deliver a digital human. For what? For customer service. We want to put it in our branch. And so when people walk in, they can talk to a branch manager. I'm like, what return does that give you? Is it going to give you customer satisfaction? Can you quantify that? Is it worth the investment?
It's not a cheap investment, just to have a digital human on a screen as you walk in the branch. Is it really worth it? So that's what we mean by use case prioritization. Can you quantify it's going to deliver a return, either in revenue or in savings or in avoiding risk? Then you can figure out the workflow of how you deliver it afterwards in the feasibility.
Good. With that, I'll hand it over to Robbie, who can go into more of what we're doing with partners and how we help accelerate some of those use cases.
ROBBIE BUTCHART: Yeah. So my name is Robbie Butchart. I'm the VP of Growth North America for Blackbook.ai. We are new into North America, so I want to go through and give a little bit of an insight on who we are and where we've come from, give a little bit of background.
And we'll jump into how we help organizations. And it's really a premise of what Rob was mentioning-- we need to define the use cases. So we'll go into a little bit of that.
And then it's great to talk about the outcomes and what we get to. But there are a lot of questions on how. How do we do this? And that's where we see a lot of confusion from folks that we chat with. So we'll jump into that quickly, and then pass it over to Tim. And he'll jump into the rest of it here.
So we've had some great growth in a very short period of time here. We've been around for about 10 years and have 200-plus staff. We are a global organization headquartered out of Brisbane, Australia. We have locations in Vietnam, Manila, Singapore, Costa Rica, and now North America, with members in the US and in Canada.
The organization started off in RPA-- Robotic Process Automation, a little bit different from the robots Rob was mentioning-- where you're automating manual tasks within organizations. So less robot-y and more software-based, and we really cut our teeth in that space. The evolution from there was full automation systems and big data requirements.
So when we think of what's required to get to the end-state of an AI-enabled organization, it takes a multitude of supporting capabilities.
So we think of all the data silos that we all know and love. We've got multiple different back ends. Data resides in multiple locations and areas. How do we manage that?
So we have teams that actually help support all of that. We've got full automation teams-- not only RPA but ground-up automation work, as it's required in some instances.
Think of the manual tasks we all do. Think of why we all continue to use Excel. A lot of those things need to be brought into the data repositories. They need to be automated so that we've got the proper data to make intelligent decisions within the business.
We also have a number of low-code/no-code teams-- so when we talk about Power BI and the Microsoft suite, that's all the low-code stuff-- and then there's how we evolve that into full software development requirements.
So where there's unique, bespoke project work that has to happen, we've got teams associated with that, plus delivery and development-as-a-service.
So if we have clients that are really gung ho on a project they've got, they know they have in-house staff, and they've got expertise that allows for them to get to that outcome that they're talking about, we've got teams that we can deliver and be part of the execution. And we have a follow-the-sun kind of model when it comes to development, which allows for us to deliver four times faster.
So it's pretty exciting what we're able to deliver at the end of the day. And the rounded-outness-- it's not a word, I'm going to use it anyway-- allows us to really come to the table with a unique proposition.
So Rob mentioned it earlier, and I don't really love a lot of words on slides, so I apologize for the bubbles here. But the baseline is that everybody wants to have a conversation about, and an implementation of, AI within their organization.
Everybody's talking about it. There are multiple sessions like these happening globally, and Canada is far behind when it comes to implementation of AI. It's actually astonishing how far behind we are.
Agentic AI is a platform as a service now in multiple countries, and we're still talking about what it is here. So it's really interesting when you start thinking about this. And when we talk about how we get to those end-states, there are a lot of organizations that might not even need AI.
You might not need it. It might be nice to have. There could be some great business cases, but there could be some phenomenal use cases and outputs based off of automating manual processes.
You can get great returns on simplifying some of your processes and bringing technology into the mix, bringing data in to make better decisions and have it consolidated to give you more insight.
That alone could be a game changer for organizations. And it all goes back to what Rob mentioned: these use cases and the ROIs, based off of the outcome the business wants to solve for.
We don't want to implement AI solutions or anything AI-related just because it's cool, or because of shiny object syndrome. That's not the goal. So when we come in and have these conversations, it all comes down to the business cases.
What are we solving for? What's the organization looking at achieving, or is there a problem that needs to be solved for? And these are just ideas in terms of statements that could be made.
But nine times out of 10, organizations have a thorough understanding of the problems that they've got, the opportunities they see, and the initiatives to solve all those.
So when we have these conversations, all of it comes down to alignment. So to the questions Jason jumped in with off the hop there, all of that gets defined in the early stages and aligns to the outcome for the organization. And then we work through reverse engineering and a workflow to get there.
And the delivery structure around this starts with the organization. It starts with that first statement. And what we're drilling into everybody here is: what's the outcome? What's the business case? What are we solving for?
That's the North Star. That's what we all align to, from the execution team all the way into these types of conversations. So when we get into these sessions, it's reverse engineering based off of that outcome. What does the current state look like? What's today?
We have workflows that exist today. We have a bunch of probably siloed data. We probably have systems that don't talk to one another. We have a lot of challenges with the organization that everybody knows about.
So we document that. We work through that. We understand what those workflows look like. Why do we have 17 touchpoints on approval processes? Things like that. And who are the 17 touchpoints?
We understand that to give us a good baseline. And from that, we take that away, and we get into the ideation of what it could be. We're all aligned on the North Star. And through that process, that gets us to a state in which we come back to our clients in a very quick turnaround, with a proof of concept, to say yes or no.
I like quick nos. Quick nos are good. Don't waste time. Let's get to it. So the quicker we can say no, the better off we are to move the opportunity forward. And the outcome is more certain, because we can check off all the boxes that don't get us there.
So the proof-of-concepts are turned around in a matter of weeks-- not months-- with the way that evolution has occurred and continues to occur.
We are seeing quicker turnaround times because of the technology that these developers and delivery service teams can now use.
So all of that gets adjusted quite quickly. And then we move through the process to actually get into, OK, now that we've identified this proof-of-concept, let's build out a plan. Let's roadmap it, and that's in conjunction with our clients.
This isn't a dictatorship. It is a democracy. We work through the process collectively. We put forward best recommendations and thoughts of what we believe the proper and the right way to move this forward, to move the needle. We line that up and get buy-in from the executive, and then move into execution.
And the one point that Rob also mentioned-- I think there's often an oversight around it from a business leader's standpoint-- is the support and maintenance side of things: how truly in-depth it is, and the requirements associated with seeing success at the end of the day.
You need a team around you that can facilitate that and allow you to continue to scale as you grow. There are some questions I'm sure we're going to get into shortly. But with the evolution and change in the AI hype cycles, it's getting wild.
AI hype cycles used to be nine months. So think of the idea: we're going to do some development, we're going to work the process, we're going to get to a production-based state. We're now seeing that same nine-month cycle take 6.7 days. Yeah, I know-- I did the same look on my face.
That is insane. No, it's coming. I know we got questions at the end. Just give me two secs. But when we get into this, how do organizations structure support models and methodology around a constantly evolving and changing environment that we all live in? It's very challenging. And I know I'm over time.
So with that, it is something that we'll jump into more. We can go deeper on it. But it is a very, very key point to ensuring long-term success of an organization's strategy when it comes to anything automated or AI-based.
ROB PARRISH: I'd say security and governance is becoming one of the biggest ones for big enterprise customers.
ROBBIE BUTCHART: Yeah, absolutely.
ROB PARRISH: Highly regulated organizations.
ROBBIE BUTCHART: Yeah, absolutely. So with that--
TIM BAILEY: Thanks, Robbie and Rob. So again, I'm Tim Bailey, a partner here at Gowling in our IP and technology law practice. And because I'm a lawyer, I have a lot fewer cool catchphrases like "following the sun."
I'm going to talk a little bit about-- we're going to presume a fact scenario because, again, that's how we're trained as lawyers. We're presented a set of facts. And then we've got to apply our understanding of the law to identify issues, and then come up with some sort of analysis.
The scenario I'm going to talk about is there's an underlying assumption that your use case has been decided. So you are this hypothetical entity in the middle, with the reddish circle around it.
And you've decided that you have a use case that will give you an ROI that supports diving deeper into the pool of AI, even though the pool is only there for 6.7 days.
So the first thing you need to do is-- the first bullet there is just, again, hypothetical. You're a software development company with a financial planning app that you currently deliver to customers. And it helps them, I don't know, hedge bets against tariffs.
With that, you've probably got a software-as-a-service model for how you engage with your customers. So that's a contractual relationship by which you provide the service, your software-as-a-service.
And because AI is so fantastic and people are interested in it-- you can tell I'm the lawyer, as opposed to the guy who works for Dell, because my animations are a little bit rudimentary. But the interest in AI--
[INTERPOSING VOICES]
ROB PARRISH: Yeah, better.
TIM BAILEY: Man. But AI has caught your attention. You think maybe there's a way we can improve our software-as-a-service that we're offering to our clients. So what we have here is three different relationships that you could consider, with again your hypothetical company being in the middle.
The first relationship-- let's see if the clicker works-- is the one between you and the AI provider. And that may be Blackbook with perhaps Microsoft on the other end, or it may be directly with Dell, or it may be with any of the AI provider platforms that are out there.
The next relationship that's important to consider is your internal relationships. And then-- sorry, not again, but a new point, number three-- you also have to consider the relationship that you have with your customers.
So all of these relationships have their own matrix of risk versus reward that you have to go through. Again, this is very high-level. We don't have all day to talk about this.
But some really high-level risks that people are thinking about in terms of AI, certainly, are bias and hallucinatory results. So bias is just the concept that however the AI model is trained could be slanted one direction or another, which ultimately will influence the way that it analyzes the inputs, and hence the outputs.
"Hallucinatory" is a great word, especially for those of us that have law degrees. And people may have heard about this, actually, recently in the news. There have lawyers who have stood up in court with AI-based documents. And they have cited to a judge's face case law that does not exist, because AI completely created it. In fact, the same lawyer has been caught twice doing this.
So that's a part of the risk reward matrix in terms of the first relationship with the AI provider. You need to have a good understanding of what bias may be there. And within yourself, you want to be able to identify those hallucinatory results that may or may not exist. So that way, you deliver to your customer isn't tainted with any of this risk.
The second type of risk is a big one-- data breaches. And it goes to a point that Rob made: are you going to have things on-premises? I think the short term was "on-prem."
On-prem, you're going to have everything on your own premises. A data breach is something you can control much more easily, as opposed to, if you don't have the resources to do that or the infrastructure, then likely your data is being stored elsewhere and maybe in a different country.
And the rules and regulations-- how that country controls data breaches, or allows you to respond to data breaches and control access to the data-- are entirely different.
Does your end customer care about that? Hmm, maybe-- again, going back to that risk reward matrix, with the first relationship with your customer or the AI provider, and then even yourself, internally.
You also have to assess things like unrestricted or unethical use, which I think our imaginations can run wild with what AI could do. Should it be let loose on society unrestrained?
And then also, is it possible that your use of AI, provided by your AI provider, could actually result in some sort of third-party claim, an allegation by someone coming out of the woodwork and saying your product is actually, at least partially, my product or entirely my product?
So I'm going to go through, with this risk reward matrix as an overlaying theme, just some of the concepts that you want to discuss or think about when you're looking at each of those three relationships.
So again, the first one is between the AI provider and your company. There are overarching models for how software-- which AI effectively is-- can be delivered: again, as software-as-a-service, or through a license model.
Five years ago, people probably would have made the distinction between these two types of relationships, with SaaS being something you get over the cloud and a license being, maybe 10 years ago, someone handing you a CD-ROM.
You load the software onto your hard drive and run it. Those are old, outdated concepts now, because AI itself can be licensed, but you'll never receive a physical copy of anything, right?
You'll receive, perhaps, a certain number of seats to be able to use the AI platform to do whatever data input and output analysis you want. Furthermore, you can put yourself in a position where, as a recipient of access to an AI platform, you're able to sublicense it to your customers, so that they can do their own licensing.
So within this first relationship, the allocation of risk comes down to some really high-level overarching concepts that are important to understand and to think about proactively.
So the first is liability limits, overall. Anyone that's ever gone into a commercial contract has had this discussion with their solicitor, their lawyer, about: how can I limit my exposure?
Oftentimes the liability limits have to be balanced against what the commercial enterprise is at the time. So just because you're entering into-- again, another hypothetical would be Microsoft, for instance.
Just because that's one of the world's largest companies doesn't mean that they're necessarily going to agree to your attempts at limiting your liability based on their mistakes. And that's a balancing of the overall relationship of leverage in those types of relationships.
The second is the issue of indemnities. An indemnity, in simple language, is a promise from one party to another within a contract that they will cover your butt should someone come after you.
A specific instance where this can come up is an allegation from a third party that you've infringed on one or more aspects of their intellectual property.
The other is, perhaps one of your customers puts their sensitive data into your app-- maybe their bank account numbers or something to that effect-- and a breach occurs somewhere along the line. It's possible that that person could sue you.
And the breach results not from your own actions or inaction, but from the AI provider's side. These are the types of risk you need to assess ahead of time.
And so interestingly enough, I learned just recently from dealing with our firm's in-house AI specialist, a guy named Al, ironically, when you look at Al, it looks like AI.
ROB PARRISH: His name's actually Steve, but you decided to call him [INAUDIBLE]
TIM BAILEY: As far as I know, his parents may have named him Al or Alan. In any event, right now, there is an ability for people that use Microsoft's AI platform-- I think it's Azure-- provided that you meet a certain number of conditions, such as putting in certain safeguards and restrictions on what Azure can do for you.
Then they'll give you a full indemnity for allegations by a third party that you have infringed a third party's copyright.
Fascinating. I don't know if I've ever seen anything like that prior to learning about it from Al. And it's interesting because, effectively, they are so confident, once those safeguards are in place, that there will not be IP infringement, that they're willing to give you that security.
The other aspects to deal with, of course, are data protection. Will the AI provider have any access to the data that you put in, so that they can use it to further train and enhance whatever models or analysis they may be doing to improve their product?
And then, of course, there's the actual, as I mentioned earlier, the location of where the data is. I know Zach and I have talked a few times about this. I'm just going to jump ahead on that question.
There's this great hypothetical, where you've got a relationship with a known AI provider, and you know that they happen to have a server farm located just outside of Oregon. What happens if they sell to a company in Russia, and that company wants to move the data farm to Russia?
There's really no way. I mean, that's an extreme hypothetical. But it's to illustrate the point that there's certainly-- as much as you may have a brilliant lawyer working on your side and they may have a brilliant lawyer working for them, there are some risks that you can't actually address ahead of time.
We can try and mitigate those. But it's very unlikely, for example, that you would have any ability to restrict or impair the ability of that Russian company to move the server farm to, I don't know, outside of Moscow.
Other really key points to keep in mind are ownership-- so ownership of the underlying code. Certainly, an AI provider is going to hold on to that. And as Rob said, at Dell, I think you said, that you guys are using AI to create code based on prompts.
Well, I think it was last week-- Meta and Google and Microsoft and a bunch of these others all came out with some stats that somewhere between 20% and 50% of their new code is now generated by AI.
ROB PARRISH: 65% for Dell.
TIM BAILEY: There you go.
ROB PARRISH: Generating the code that runs in our products.
TIM BAILEY: Just think about that for a second. Computers are writing computer code.
ROB PARRISH: And testing it.
TIM BAILEY: It's amazing to me. So certainly, Dell's not going to give you any rights to their underlying code. But you have to balance what kind of relationship you'd want to go into with Dell. Again, just using Dell as an example, because we're up here.
In terms of your inputs and outputs, perhaps Dell wants to have a say on whether or not they'll have some rights in the output that you may then push down the chain of your commercial reality to your customers using the app.
There's also the underlying intellectual property. And I apologize, I didn't say at the outset: IP is an acronym for intellectual property. It's the only one I've got. Oh, no, sorry, I also have AI, artificial intelligence, which could also be one. And SaaS. OK, there are three acronyms, I apologize.
AUDIENCE: [INAUDIBLE]
TIM BAILEY: That's not an acronym. That's a short form for three names. So there's the underlying intellectual property that the AI provider has created. But then there's the new output.
So how do you get an understanding of this if, for example, you're using AI? A different example could be using AI in a drug discovery program.
So who will own the rights to try and secure patent protection for the new use of known compounds or new compounds that didn't exist beforehand, simply because computer-generated computer code has helped you discover a new arrangement of molecules?
And then, of course, there's the issue of service level agreements in terms of what happens if the system goes down. How quickly do they have to respond?
What sort of reporting requirements are there? Would Rob have to send you an email in the middle of the night saying, our AI system has gone down, so your app is not going to work for the next-- probably less than 6.7 days? It'll take them less time than that to fix it.
The second relationship to consider is the internal one-- the you, which is really you and your employees, and you and your contractors. So here, you would have to consider, in your employment agreements and your contracting agreements, what the permitted uses of AI are.
How can you use contracts, and likely policies to bolster those contracts, to safeguard confidential, personal, and proprietary information or data? What sort of obligation are you going to put onto your employees to ensure that, if they do use AI-- and if it's been approved by, for example, a manager or supervisor, and the safeguards are in place-- they're obliged to actually give credit on work product that's been at least partially generated by AI?
And have they taken the steps to verify it so they don't end up standing in front of a judge citing case law that never occurred?
There's also the issue of ensuring a clean transfer of rights. Employees and contractors are individuals, or individuals that work for your contractors, and you need to ensure that there's a clean transfer of rights from each of them.
So in terms of intellectual property rights, we're talking about copyright, which is the most common form of protecting software code. But then there are also potential inventions that have occurred. How can you ensure that, if your employee is engaged with some AI-- I confuse my own acronyms, and there were only four.
How do you ensure that your employee is transferring the rights they have as an individual to your company, regardless of AI being involved? Well, that comes down to having clear terms in your employment agreements, clear terms with your contractors, and then policies that are published and well-known and have obligations in their contracts for each person to adhere to those policies.
And then, of course, IT security obligations: ensuring that your employees aren't keeping a whiteboard on the fridge at home-- a home that's also used as a daycare, where people are coming in and out all the time and can see the latest iteration of your password written down, so that anyone can access your system, and therefore go either upstream into the first relationship with your AI provider, or downstream into the relationship with your customer. Which brings us to our last slide, the third relationship. And that is the end of my dizzying animations, I promise.
These are the contracts specific to you delivering, and your customer receiving, services, whether or not there's an AI component as part of them.
So again, terms of use can relate to, if you have the ability to sublicense the AI, then how many seats can you provide to your customers? What restrictions can you provide, or are you obligated based on it flowing from the AI provider?
So for example, restrictions on acting as a service bureau. I don't know where this term comes from, but it effectively means that your customer can't just add their own customers on, daisy-chaining new people on and charging them.
The services you provide to your customer are the end of the chain; effectively, no one further along can provide new services to a new company. They're your customer, and they're the end customer.
Again, the concept of liability limits is critical to understand, and again, this will be a reflection of the commercial reality. Is your customer on a monthly subscription paying $299 a month?
If that's the case, then you might think, well, our liability limits will be quite low. But the risk they're exposed to by putting their financial information into your app could be much higher.
So there's always going to be a relationship between, again, balancing that risk reward matrix. And for your customers that are, at the end of the day, likely signing what we call an end-user license agreement, they're the end user.
They don't have a lot of leverage to change the terms on which you're offering your services. So they'll either use you or they won't. You don't want terms that are too restrictive, because then no one will sign up and become your customer.
Again, you'll want to reflect the promises in terms of having someone's back, should one of your customers be sued, or an allegation be raised by a third party that their use of your app has resulted in infringement of that third party's rights.
And I think, probably, the model most people would do is they would have a flow-through from that first relationship, from the AI provider through to yourself. And then that would flow through to your customers so that--
Again, if we go back to the example of Microsoft, if your customer is alleged to have infringed the copyright of a third party, then you would want to ensure that you've created the ability to contractually flow through that indemnity you've gotten from Microsoft.
There's also this concept of using your customer's data and feedback for training and improving your app. You can see how a lot of these points mirror the ones that you have with your AI service provider.
And then there's a concept of ownership. Again, who owns what when it gets put into the app? Who owns the underlying IP? And who will own any new IP that may come out of your relationship with your customer?
So, blending items 4 and 5: you could have a customer feedback form, where your customer says, hey, this would be awesome, except at night I find it hard on my eyes-- there's a lot of blue light, so maybe you could have a red-light mode in your app. It sounds like a terrible idea; it's off the top of my head, I apologize.
But that feedback could be something that's valuable to you. You want to ensure that your company has the rights, through your contract with your customer, so that when they sign on: any feedback you provide to us, through the forms we provide you, will be ours to use to improve our systems going forward.
And then, again, mirroring the last point are the service level agreements. I'm not sure if anyone here has negotiated or participated in writing a service level agreement.
But it's probably the most monotonous job I've done as a lawyer: trying to figure out how many decimal points you want to go to, from the identification of the point in time when a security breach may have happened, to the appropriate responses that have to follow, to how much downtime or uptime you will promise in the agreement.
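Those decimal points translate directly into minutes of allowed downtime. As a rough illustration-- straight arithmetic on a 30-day month, not any provider's actual terms:

```python
# Allowed downtime per month for common SLA uptime targets.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

for uptime in (99.0, 99.9, 99.99, 99.999):
    allowed = MINUTES_PER_MONTH * (1 - uptime / 100)
    print(f"{uptime}% uptime -> {allowed:,.1f} minutes of downtime per month")

# 99.0%   -> 432.0 minutes (about 7.2 hours)
# 99.9%   -> 43.2 minutes
# 99.99%  -> 4.3 minutes
# 99.999% -> 0.4 minutes (about 26 seconds)
```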
If anyone wants to talk more about that, I'll direct you to someone else to talk about it. And I apologize for going quickly. I'm trying to be mindful of everyone's time on a Friday afternoon.
Those, again, are the three primary relationships. As someone looking to commercialize AI, from a very high level, these are the relationships you need to understand-- the different risk-reward matrices within each of them-- so that, ultimately, you can end up with a product that seamlessly integrates AI from Rob's company, perhaps using Blackbook as a facilitator, through to providing an app to your customer, so that everyone can walk away with more reward and less risk.
ROB PARRISH: There's just one point I'd probably add that we're seeing a lot more now at Dell. The contractual side of software versus how people are using it is coming up a lot with Dell, because we're a large organization, versus software providers that can't sign up to the level of indemnity in some of these contracts-- indemnities that would quite frankly bankrupt them-- particularly with some of the large regulated organizations.
The other piece, from a legal aspect-- we call it reasoning, with an AI proving exactly how it has come up with an answer. I don't know if this is the right legal term, Tim, but proving beyond reasonable doubt in a court of law that this is exactly how the AI came up with this answer.
So if something happens in law, can you prove all the way back why it came up with that answer, based on this data? You'll see a lot now in AI where it'll come up with an answer for you, and then there'll be a one or a two, and you can click through to a link showing where it got the data.
Sometimes the highly regulated organizations need to be able to prove: AI, why did you come up with that answer?
TIM BAILEY: So it's not beyond a reasonable doubt. That's the criminal standard for evidence. There you go.
ROB PARRISH: But you're close. That's why you're here.
TIM BAILEY: It's more of a 50/50, called the balance of probabilities in Canada. But anyways, thanks, Rob. That's a good point.
ROB PARRISH: I knew there'd be a better word.
TIM BAILEY: Yeah.
AUDIENCE: Sorry, I didn't mean to hijack. On reasoning models, [INAUDIBLE] just published a paper where they did a smart observation of their reasoning.
So [INAUDIBLE] can actually make reasonably explicit reasoning that [INAUDIBLE] makes explicit, leading up to the answer. But it isn't the actual reasoning used.
The model was actually [INAUDIBLE] observation. And the reasoning that the model made explicit wasn't how it actually got the answer. It was just lying to satisfy user expectations.
ROB PARRISH: Well, that's the point. So most of the time, when someone does the inspection, they'll find that problem in the audit trail of the reasoning. You'll get a lot of software providers now that will show you, all the way down to the code the platform was writing, how it came up with the answer. So there's always a way of following the breadcrumbs.
So as long as you're working with an organization that can help, make sure you can prove the breadcrumbs all the way back to the original source of data: this was the data it used.
This is the reasoning it went through to get to the resulting answer it's come up with. And then, therefore, why it may be wrong.
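A minimal sketch of what those breadcrumbs can look like in practice: record, alongside every answer, the sources it was grounded in, hashed so the exact document version can be proven later. The structure and field names here are hypothetical, for illustration only, not any vendor's audit format.

```python
# Minimal provenance "breadcrumb" record for an AI answer.
# Field names and structure are hypothetical, for illustration only.
import datetime
import hashlib
import json

def audit_record(question: str, answer: str, sources: list[dict]) -> dict:
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "question": question,
        "answer": answer,
        # Hash each source so you can later prove exactly which version
        # of the document the answer was grounded in.
        "sources": [
            {
                "uri": s["uri"],
                "excerpt": s["text"][:200],
                "sha256": hashlib.sha256(s["text"].encode()).hexdigest(),
            }
            for s in sources
        ],
        "model": "example-model-v1",  # assumed model identifier
    }

record = audit_record(
    question="What is our refund window?",
    answer="Refunds are accepted within 30 days. [1]",
    sources=[{"uri": "policies/refunds.md",
              "text": "Refunds are accepted within 30 days of purchase."}],
)
print(json.dumps(record, indent=2))
```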
ZAFFAR JAFFER: Anne.
AUDIENCE: Just one thing. The online audience is only catching the questions as they're asked in the room. So if the panel could repeat each question, that would be great. I do have a question from somebody in the comments.
John is wondering about the business case versus customer acceptance, your comments on customer acceptance in terms of, how much do we have to tell them, them being customers?
ROB PARRISH: And did that come up during Rob's-- Robbie's? OK.
ROBBIE BUTCHART: Say it one more time. Yeah.
AUDIENCE: He's couching it in the business context of a call-center function: how much do we have to tell them?
ROBBIE BUTCHART: Them being the client. So I'm going to answer a question with a question. I guess more context is required for me to properly answer that, in order for me to truly understand what we're talking about.
Is it when the client calls in? Is it when they're on a chatbot? What is, I guess, the example use case there? I would position it back to John to get more depth and clarity, because I think, when that situation arises, we need to identify those things in the prework and the discovery work that's done upfront-- identify them and engage with a firm like Gowling to say, this is what the initiative is within the organization.
How do we handle this from a legal standpoint? Do we need to handle this from a legal standpoint to have clarity around all of that so that we build it in a way that there is certainty around the outcome of the delivery? That's how I would handle that without having more depth in the question. But that's at a high level, and I'm happy to have a chat with John afterwards, too.
ROB PARRISH: The other way to look at it, from a Dell point of view: if we've got one of our products or solutions, how much do we have to tell the client about how we've built the solution we're going to deliver for them, which is our IP?
Sometimes we're not going to tell you how we built it, because you could just go and copy it-- someone could go and repeat it and copy it. Versus, if we're committing to an SLA or an outcome, do you care how we built our solution, as long as that solution is delivering the outcome that--
TIM BAILEY: Yeah.
ROB PARRISH: Versus not telling them what we're doing, and then that affects the client. That's probably more of a question of, is that ethical?
ROBBIE BUTCHART: And there's a difference between a product that exists off the shelf and something that we're creating. So there are two different trains of thought around that.
AUDIENCE: I have a bit of a variation on that question. Last week, there was discussion on whether an organization might want to volunteer that AI was being used versus when it is not. When you're working with your clients, how are you seeing them approach that issue?
And the concern in this past discussion was, if you don't tell them, they'll find out that AI is being used in certain circumstances. Again, it's [INAUDIBLE]. There might be a breach of trust, or an undermining of trust from the customer, that might affect the business. What do you see people inclined to do--
ROB PARRISH: I think most of it now is always erring on the side of truth. If you're going to be using a customer service agent, for example-- and I'm not going to mention any names, in case I get into a copyright infringement for mentioning the name.
If you have a customer service agent for your telco company, and somebody calls in because they've got a problem with the line to their house, and you're making it seem like the customer service agent is a human agent, not a chatbot-- that's the point where you've got to make it clear what this agent is.
You say, hey, I'm Rob from XYZ. That makes the client feel like they're talking to an actual agent. And if that's a chatbot, that's going to, one, damage the brand from a PR perspective. Or two, if they feel like they're talking to someone, does it go further than that, more to a legal point?
So I'd always err on the side of telling them. I don't think we're in a world now where you don't need to tell them. If I go online and deal with a chatbot, but that chatbot is really accurate and giving me the answer I want, I'm like, great-- good for you, company, for having a chatbot. This has been awesome.
If the chatbot is useless and just put me through to an agent, then that's where the technology needs to get fixed. Or don't use it because you're not actually getting a return on investment. You're actually having brand damage.
ROBBIE BUTCHART: Yeah, agreed. I got nothing to add to that. I agree with that.
ZAFFAR JAFFER: So, some questions that we wanted to run through. One for you, Rob. We know a lot of businesses are still hesitant to adopt AI. So in your experience, what are some of the common misconceptions or barriers that you've encountered? And more importantly, maybe for this audience and those listening virtually, how can leaders help overcome those challenges?
ROB PARRISH: Yeah, that's a good question. I think one of the biggest misconceptions we've seen is that AI for the enterprise, or commercial-grade AI, is as easy as consumer-grade AI.
But it comes down to using AI with your data. And we don't just want to put that in the cloud, because now, is the AI in the cloud going to train against our data? Is the next version of these models going to get more efficient because they're trained on our data, too, because we put it up there?
But also, it's the complexity of delivering a solution with all the things that we've talked about from a legal aspect. We, as Dell, do this with a lot of our big customers-- the heavily regulated ones, like financial services and health care.
We work closely with AHS and their exec sponsor. And they can't do things unless all the I's are dotted and the T's are crossed from a legal perspective and a security and risk perspective.
It's not just, we can go and use ChatGPT. That's a public, consumer-grade product. They have to put security guardrails around it: how are we using it, and how are we using it with our data? That's the bit that's not easy.
So I think the biggest misconception is that because everyone's using AI, using it for the enterprise or for regulated industries is easy. It's hard. And that's why we're providing a lot of some--
Even from a legal perspective, we work with a lot of software companies, where we will actually indemnify some of those software providers on Dell paper. We have master service agreements, so we often have to do the legal pieces on our paper so that those customers can use software from other providers.
ZAFFAR JAFFER: Excellent. So Robbie, one for you. You mentioned Canada lagging behind in the adoption and use of AI. Looking forward three to five years, what role do you believe AI will play in helping shape the future of Canadian industry? And how do you think companies can position themselves to lead rather than follow, from a Canadian perspective?
ROBBIE BUTCHART: I think with where we're at right now, with the evolution and the hypergrowth that we're seeing in the space, I don't think anybody knows what three to five years looks like. I really don't. And I think if anybody says they do, they're full of it.
It's changing so quick. So everything that we're thinking three to five years-- it's a complete guess at this point in time. What we're certain on, what we know today, is that it's not slowing down, despite everybody saying we need to slow it down. It hasn't. It's continuing to ramp up. It's speeding up. The evolution is wild to be a part of, and it's exciting to watch.
So three to five years-- I really can't guess it. Like I said, I don't think anybody can. I think for us, as a city, a province, a country, and wherever you are in this country that's applicable to this, I think there's a mindset shift that is coming. And we're starting to see it already, where the risk aversion that Canadians have to these types of technologies is starting to shift.
We're seeing more willingness and openness to try it. And in order for us to evolve and catch up to the rest of the world from an adoption standpoint, walking in with some of these technologies that don't exist yet is going to require vulnerability. It's going to require risk, and tolerance of it.
So when we walk in and have a product that exists off the shelf, like something that's tried and true, we know it's going to work. That's what we're typically used to seeing.
And with these changes that are coming now with the technology that's prevalent, that's not a guarantee. So we're starting to see people have a willingness to say, OK, let's take a small subset group within the organization. Let's build a use case for it. Let's get a couple of users. Let's validate a yes or a no quickly. And then we start to scale it through the business.
And we iterate and we go. Iterate and go. And that's starting to be an evolution that we're seeing and a willingness for business leaders to really get us to that point.
And without that mindset, I don't see us catching up. So it's nice to see. And that's what I'd say is, get on the train now. Like it's moving. We got to be a part of it.
ZAFFAR JAFFER: We talk about how the adoption continues to accelerate. And Andrew, you and I have talked about this, too. AI will give you a lot of responses. One response it won't give you is, I don't know.
So, Tim, from your perspective, how do you ensure or how are you helping companies ensure responsible AI use, both ethically and from a data governance perspective? I know you touched on that. But how do you work through that?
TIM BAILEY: Again, I think it just comes back to the overarching concept of the risk-reward matrix that you have. Whether you're the AI provider or not-- which of those one, two, or three relationships are you in?
That really defines, likely, your risk tolerance, which is going to be informed by how much money you're spending or potentially going to make. And then from there, it's understanding what can you control and what can't you control.
So like the hypothetical I gave was the company moving from the US to Russia. You have very little ability to prevent that from happening. So you've got to just work that into how you balance the risk of that happening versus the reward you may receive until that happens.
And then the risk is it goes to a country where perhaps your data isn't secure. And I just choose Russia as an example. Just to be clear, not trying to allege anything nefarious going on in Russia with respect to data protection.
ROB PARRISH: I would say there's a piece on that, too, the same way. You can build that domain expertise all the way through the workflow. So if you're working with someone like a Blackbook, you could have a lawyer present for the domain expertise around how protected are we through this workflow. You could have someone that might be in--
It's an HR digital assistant. What exactly are you trying to do from an HR perspective? And they're involved in the loop. I think, typically, it's always been technologists going to build some technology, and then business owners try and use it.
Now, those two are really coming together. You need the domain expertise and the technologists working hand in hand. If you have them separate, that's where you've got some degree of risk in between, where those two haven't been lined up.
ROBBIE BUTCHART: And I'll add to that the depth that's required in the discovery work that's done upfront, in that prework that I was talking about, that's where all this is identified.
Not only the domain expertise within the legal space or the hardware space-- it's integrations, it's API requirements, it's existing team members, whatever that looks like. It all needs to be part of that, so that there are no gaps within the flow. And that allows you to have certainty on the outcome at the end of the day.
TIM BAILEY: And so the only point I would add is maybe additive, maybe not. But again, this concept of starting at the first principle of having your use case decided-- as Robbie mentioned, having that from an executive level, a top-down process, so that you have a direction. I think you used North Star.
And from there, we have to accept the fact that we are moving away from, again, getting a CD-ROM of Microsoft Office and loading it into your computer. That is not the world we live in anymore. It is continuously changing 6.7 days for something to turn around as a new iteration.
And so how does someone manage the various risks that can come up from that? You just have to be on the train. You have to have bought the ticket for the right car. And you have to be paying attention to what's happening, so you know when your stop comes up, I think.
And these guys mentioned, it's important to make sure that you're working with whoever's giving you strategic advice on this so that you can see when-- my railway analogy has run out here.
But make sure there isn't a cow on the tracks ahead of you. And if there is, what are you going to do about it?
I think, again, there's no one answer. It's just you have to be proactive and reactive at the same time.
ZAFFAR JAFFER: That's a good way to finish. So that wraps up our time this afternoon. Thank you to everyone who joined us virtually; we hope you found the discussion engaging. Feel free to reach out to any of us if you wish to continue the conversation or have any questions. Have a good rest of the day.
TIM BAILEY: Thank you.
AUDIENCE: Thanks, guys.
ROBBIE BUTCHART: Thanks all.
ROB PARRISH: Thank you.