Neil Hendron
Partner
On-demand webinars
Neil: Hello and welcome to this session on IP Law and Strategy for Managing Artificial Intelligence: A European Perspective. Before we begin there are some initial housekeeping slides to attend to. The first point to note is that this session is being streamed live, but the recording will be made available after the event, both through the Gowling WLG website and also the Silicon Valley Association of General Counsel All Hands virtual meeting platform. For all registrants seeking California CLE credits, please visit the link in the chat function, which should be there now, and complete the form. Please do note that MCLE credit for viewing the recording will be available through the All Hands meeting CLE service, so do go through that platform rather than directly through the Gowling website if you're looking for MCLE credit. First things first in terms of legal disclaimer. It wouldn't be a presentation from a law firm if there wasn't a legal disclaimer. Today's presentation is not intended as legal advice. It's a high-level overview. It's impossible to cover all relevant details of the subject matter. Available rights and remedies will depend on the unique facts of each situation, your typical contract or sub-contract and the nature of your project. Of course we would be very happy to assist with any specific advice. Do contact your qualified legal counsel before making any decisions or taking any actions. This is of particular importance. Every province and territory has its own regime. As you know, the situation is extremely fluid and changing on a daily basis as things evolve. Your best course of action could also evolve. Please follow up-to-date, reliable sources for your information. So, by way of introduction, my name is Neil Hendron. I'm a partner in the UK Corporate team of Gowling WLG. For those joining as part of the SVAGC All Hands virtual meeting, perhaps you're familiar with Gowling.
We're an international law firm. We do not have a US presence, but we work regularly with US companies and law firms in relation to international matters. We have over 1,400 legal professionals working across 19 cities from Vancouver to Beijing, our dual centers of gravity being Canada and the UK, and both myself and Matt Hervey, who is today's presenter, are based in our UK offices in London. Having previously attended the SVAGC All Hands in person, I am, and more broadly we at Gowling are, delighted to be able to participate virtually in this year's meeting, and I do hope to be able to see many of you again soon when the conference is next able to convene in person. So my role in this session is just to introduce Matt Hervey and to moderate Q&A at the end of the session. The format of the session will be for Matt to deliver his presentation for approximately the next 45 minutes or so, and we'll then have time for some Q&A through the Q&A function on the Zoom platform.
To introduce Matt properly, he is the head of our Artificial Intelligence team in the UK. He's also a partner in our Intellectual Property team. Matt is a general editor of The Law of Artificial Intelligence, a practitioner's handbook published in December last year by Sweet & Maxwell. The book analyzes specific legal issues inherent in AI technology, how current UK civil law and criminal law applies to AI, the principles of ethical AI, emerging regulatory measures and the use of AI in law firms and in the administration of justice. Matt co-wrote the chapter on AI and intellectual property within that book, as well as being general editor. He advises companies of all sizes on their intellectual property strategy for artificial intelligence. So without further delay I will pass over to Matt to take us through the presentation. Thanks, Matt.
Matt: Thank you very much, Neil. It's always nice to be introduced by you. Thank you. Let me just outline the scope of the session. I'll put the agenda slide up so you can see it at the same time. So, I'll make some introductory remarks about AI in Europe and the range of legal issues for AI. Then I'm going to focus in on the classic IP rights (you'll notice I don't count trade secrets as an IP right; that's just a categorization issue, but it seems to be a popular one) and how the classic rights apply to AI. Then, spoiler alert, because classical IP really does not apply well to AI, I'm going to talk a bit about trade secrets and about contractual measures, because they are really important from a European perspective for protecting AI. Then I'm going to talk a little bit about the unusual risk of disclosure that AI, in particular, presents, and give some guidance from, again, a European perspective on the key considerations when choosing whether to pursue a monopoly right by patent or to rely on trade secrets. Now, normally when I talk about AI I give an overview of the long history of AI research, I contrast expert systems and machine learning, and I explain some of the fundamental issues with it and its potential as a tool. But I think when I'm talking to a Silicon Valley audience that is wholly redundant, and since you lead the world in AI I imagine you are very familiar with the technology and its promise. So instead I'm going to outline the importance of AI in Europe and in the UK.
Many European countries have centers of excellence for AI, and Europe also has a very active private sector. So the UK has the Turing Institute, a center of excellence for AI research, which in fact combines the five leading universities in the UK in that field, and investment in AI in the UK has been estimated to be the third highest in the world, obviously after the United States. This infographic, which is from 2019, shows a sort of medium-term trend: AI investment in Europe has been by far the highest in the UK. Indeed, on this chart the UK investment couldn't even be shown on the same scale as the rest of Europe. The UK has some very high-profile AI companies which have been sold, and these include Magic Pony, which was sold to Twitter for 150 million dollars, SwiftKey, sold to Microsoft for 250 million dollars, and DeepMind, sold to Google for half a billion. AI is also a major focus of European governments. It is one of the four focuses of the UK's industrial strategy, and it's also reflected in UK government procurement; in particular, the UK National Health Service, the NHS, is very carefully devising procurement rules for AI, focused through a team called NHSX. In the EU itself, many of the member states have specific AI strategies, so it's not just the UK, and there's also an overarching strategy for the EU itself. That includes investment in AI and creating the conditions for a data-based economy generally, and that includes measures such as open data sets from government and quasi-governmental bodies, robust protection of data privacy, and data portability for consumers moving between suppliers. As part of this, the European Union is openly keen to regulate AI, and Ursula von der Leyen, the current President of the European Commission, actually promised to regulate AI in general, not at the sector level but in general, in her first 100 days. There's an open aim in the EU of achieving clear rules to attract inward investment.
The idea is that if you have clear written rules then people know where they stand, and they will do business in the EU. But it's also to export European values; I think they want to build on the success of the GDPR, which was in effect exported to other countries, and to do the same to achieve ethical AI, according to European values, internationally. Now those programs have been delayed by COVID, but the current timeline the EU has published is to have a legislative proposal for AI in this quarter of the year. But already a lot of work has been done: they have published guidelines on ethical AI, and individual roadmaps and studies have been produced by the regulators of various sectors. Recent examples cover the aviation sector and also the use of AI in the justice system in Europe.
I want to talk a bit about my book. Since we seem to be at an inflection point, where regulation and new laws are coalescing, it seemed like a good time to start pulling together what we hope is only a first edition. I actually received my copy just before Christmas. Here it is, and it's surprising to think that when we proposed it to the publisher we had to ask whether we could identify enough law as it applied to AI, but it seems we found something to speak about. It covers UK law and harmonized EU law. I've just put up there the chapter subjects so you can see the range of the practice areas it touches on. My own expertise is IP, but I think by being the general editor of the book I've really learnt the importance of a holistic view when it comes to dealing with AI: the technological developments, movements in the market, cross currents between practice areas, the emerging regulation and, of course, the impact of ethics. I want to illustrate those cross currents when I talk about IP strategy.
So turning to IP, the key theme is that EU law and much of UK IP law was never designed for AI. It's patchy, and it's really happenstance how the laws tend to combine and how they are interpreted. But I'll step through the key IP rights one by one. Then, as I said, I'll turn to trade secrets and contractual protections, and then talk about the decision to be made between a monopoly right and trade secrets, where disclosure risk is one of the key points, and then I'm going to talk a bit more broadly about what those disclosure risks are in Europe. Now, my first point is the context in which most IP law was written. The fundamentals of patent eligibility for European patents were developed in the 1960s, and I've illustrated a typical 1960s computer. We are looking at mainframe computers, punch cards and some early and very basic graphics, and really no internet at all; ...net hadn't even been launched. AI was an area of research at the time, but it was focused on symbolic approaches, in which human programmers attempted to replicate human logic in code. So really, this fundamental law was set 50 years before the current advances and this step change in machine learning over the last decade. Those advances have of course been enabled by greater computing power and memory, but also the availability of extremely large data sets, in part from the internet and e-commerce. Now one outlier to that general trend is copyright in the UK. The UK's Copyright, Designs and Patents Act in 1988 allowed for computer-generated works. That covered literary, artistic, musical and dramatic works and also unregistered designs. But it was such an outlier within Europe that the approach was rejected by the EU as premature when it was drafting its first directive on the copyright in software in 1991, and it was still not implemented as a concept in the revised directive on software 18 years later in 2009.
So except for that one example in UK copyright law, UK law and harmonized EU law on IP was never designed to address two of the current developments. One is the importance of data and, secondly, the emergence of machines able to produce human-like outputs, including literary works for the purposes of copyright and inventions for the purposes of patents.
Now I'm going to step through the three parts of the AI assets: the inputs, the technology and the outputs. I'll talk about the inputs first, and I'm illustrating that with some wonderfully creative training data, pictures of blueberry muffins and Chihuahuas, which is obviously incredibly important for a visual classifier, and I want to talk about data. First one has to consider the terminology, because data can mean mere information, pure data, mere data, depending on what you want to convey. But actually I think the term data is often used to mean the source material on which you're training, so it could in fact be in the form of a copyright work, a photograph or an academic article, and so you may have protection for your data to the extent it's in that form. But information per se, even the information within that copyright work, is not protected by copyright in the UK or Europe. The UK courts have explained this in the terms that copyright exists only in the literary form in which information is expressed, and that reflects the rest of international practice. So both TRIPS and the WIPO Copyright Treaty provide that copyright protection extends to expressions and not to ideas, procedures, methods of operation and mathematical concepts as such. Now that doesn't call out data, but it's clear that what is protected is the expression, to the exclusion of other things such as information. Moreover, even where you are extracting information from a copyright work, and that does typically involve copying, the UK and the EU provide some exceptions to infringement for the purposes of data mining.
Since 2014 the UK has had an exception to copyright infringement, but that was limited to non-commercial purposes. So if an academic was mining copyright works they were allowed to do so. Now the EU, in 2019, adopted a wider exception in the copyright directive, and that applies to commercial purposes, but it is subject to an express and suitably communicated reservation of rights by the owner. Now how that's going to be done remains to be seen, because they're really thinking about information you might be scraping from the internet, and where should that reservation of rights happen? It could be in a ... text file or on the page itself, but what happens when the image appears elsewhere? How do you keep expressing that reservation of rights? Moreover, there is an absolute exception for non-commercial purposes, and it's wider than the UK's exception because it also applies to cultural heritage institutes and research organizations, and it appears the definition of the latter might cover, for example, collaborations between commercial companies, maybe life sciences companies, generating pooled data resources. Now, since the UK has so far declined to implement the wider exceptions the EU is carrying on with, and post-Brexit there is no requirement to implement such a law, we may see within Europe, and indeed more widely, differences between jurisdictions in their allowance of data mining. That may create areas favourable to data mining because of the local exceptions and, conversely, may lead companies that have valuable sources of potential data to limit access to jurisdictions where these exceptions do not apply. Information per se is also not protected by the EU sui generis database right; despite its name and the promise it seems to hold, the protection of databases expressly excludes the works, data or materials themselves within the database. Now, if what you hold is a valuable training set and someone actually needs to copy all of it, or much of it, to make use of it, that might be a moot distinction, and effectively the database right is what you need. But actually, for many years the database right has been considered of very little practical use.
I think most significant for a US company is, of course, that there are no similar rights outside the EU on which to establish reciprocal recognition. Also, very early on in the history of the database right, in 2004, a series of cases cut it right down in scope. That is because the definition requires that there has been investment in the obtaining, verifying or presenting of the contents of the database. The courts held that where a company incurs the costs of generating that data in the first place, that is not the right kind of investment to reward with a database right. This has come to be known as the spin-off doctrine: if the data has spun off from another business activity, you can't get database rights for it. That is obviously a huge blow to, again, maybe a life sciences company which is generating vast amounts of clinical trial data but can't protect it as a database right because it would have incurred those costs anyway. However, I do think this doctrine is shorthand, because that's all it is; it's a sort of summary of the case law. It's not actually what was said exactly in the judgments, even though it's widely believed. I think it is worthy of re-examination, because when it comes to ingesting data into AI for training purposes there is in fact a lot of investment, potentially, in processing that data, in checking for bias, in creating synthetic additions to it and the like, which might be the right kind of investment to allow you to attract database rights after all.
Next, I'm going to talk about the protection in Europe for AI techniques and technology. Patents are clearly the most visible sign of interest in protecting AI, because the system requires applications to pursue a registered right, so people can count them up and figure out that there has been, for example, about 800% growth over the last 10 years in applications relating to AI, according to the UK IPO's analysis and also WIPO and other patent offices around the world. This growth, in fact, disguises what I think are considerable problems with patenting in this space. There is a wealth of cases on protecting AI by patents, and what I've illustrated there is the first specific guidance from the EPO, from 2018, which has subsequently been revised. It offers specific guidance on the protection of computer-implemented inventions, because the EPO recognizes the increased demand and that people really need to educate themselves more on the viability of patents. This is a huge field and I can't hope to summarize it in a talk like this. But I would make a few brief, high-level points suggesting that it is in fact relatively difficult to get patents in this space, despite the number of applications. First, many AI techniques are old. The fundamentals are very old; some of them go back to the 1940s and beyond, and even the fundamental concepts of machine learning have been around since before our computing power enabled them to reach their full potential. The second is that they face exclusions in the patent systems of all countries, particularly mathematical methods and computer programs as such, for the purposes of European patents. Even if a patent is secured, there is doubt whether it will continue to be held valid. It matters because there's always the risk of a shift in case law, just as the US experienced with Alice.
So there's always a risk of unknown unknowns, where the law changes, but there's also a considerable problem already identified, and that is widespread uncertainty over the requirement for sufficiency. That is the amount of disclosure required to enable the skilled person, when reading the patent, to work the invention. It is the subject of ongoing study by various IP groups and by patent offices. But the case law may crystallize at some point, and we may find that some of the earlier granted patents in AI are subsequently found to be invalid. The other protection for the technology is copyright in software, and that is suitable for, say, a platform someone has written to allow third parties to develop AI. Where someone has written a suite of code, you have your software copyright, and that protects you against someone copying the actual code you've written or translating it to port it between platforms, between ... systems. But it's very important to realize this only applies to software written by a human; that is by definition, harmonized across EU law, and the harmonized EU law also expressly excludes protection for ideas and principles which underlie any element of a computer program. So copyright cannot be used to protect the actual technology. That is what a patent is for.
Now I'm going to move to outputs, the AI outputs. I'm illustrating that with two examples. On the left, a series of what appear to be art works, which were in fact assessed by members of the public to be art but were generated by an AI, and on the right, some poetry generated by AI. But obviously the outputs of AI can be immensely valuable, such as a new drug target or a new effective molecule, something which could be monetized in the millions or billions. So machine learning can output patterns and data, classifications of inputs, and it can output, objectively at least, things which resemble art, music and inventions. But there are many challenges to IP protection for the outputs of AI in Europe. In common with the US, both the EPO and the UK IPO received applications in relation to inventions claimed to be made by DABUS. So DABUS, as many of you will know, is an AI created by Stephen Thaler, and he said in his submissions to these patent offices that the AI, alone without any meaningful human intervention, had come up with two inventions: one essentially for a container and the other for a light flashed in a certain way. He is pursuing these applications in multiple jurisdictions, and we've had first-instance judgments from the UK IPO, from the EPO and from the USPTO. In the UK it has also gone on appeal to the UK High Court, which has also given its decision. It is a typical effect of the fact that IP law was written at a time when such inventions were not contemplated and, really, it's no surprise that your invention is not patentable where it has been created by an AI. The judgments have decided that a human inventor is required, and that is for a mix of administrative reasons, such as forms which require the name of an individual, but also two fundamentals.
One, it was found in the UK that our Patents Act was written in such a way that the inventor is construed to be human and, secondly, that an AI, because it is not a legal entity, cannot own the rights to the application and cannot pass them to the applicant. So on multiple grounds it was impossible to patent an invention made by an AI. The impact of this is not yet entirely clear, because what we have is a real debate in the community about whether this is actually realistic, whether AIs will in fact begin to generate inventions without meaningful human interaction. And it's important to remember, and this is often misreported, that none of the patent offices nor the High Court made any finding that the DABUS AI in fact made the inventions. All of the judgments were made on the assumption that it was the inventor. In my travels talking to clients and at conferences I see a range of opinions as to whether this is possible or likely in the near future. I think it's of particular significance for life sciences companies, where under conventional doctrine, at least in Europe, you have your invention when you have plausibility that your new drug, or a known drug, will be effective for a new indication, and thereafter the expensive and time-consuming clinical trials don't actually contribute to the inventive step. So, if an AI were to identify such a suitable candidate and to make it plausible, arguably the invention is made at that point, and you may then not be able to get IP protection downstream.
Now the practical effect of that is questionable, because I think there's a grey area and there is, as yet, no case law to draw the boundary line between AI being used as a mere tool by a human and AI acting as an autonomous inventor. There's an important difference between US practice and UK and EPO practice in this regard. In the US there are general penalties for not naming the correct inventor, and I believe that if the inventor has been wrongly named and there is fraud, in the US the patent would be unenforceable. But in Europe there is no effective deterrent to naming a human as an inventor where in fact an AI made the invention.
Now returning to the UK and computer-generated copyright works and designs: although we've had this law on our statute books for 30 years, there is in fact only one case on the point, so there is little evidence of it having been used up until now. There was actually a case my firm ran and won, but since then the law may have changed because of EU harmonization, potentially invalidating the basis of our claim to protect computer-generated works. That is because when the software directive in Europe was put into force it harmonized the test for originality for copyright works. Whereas the UK previously required only that the work hadn't been copied from someone else and involved sufficient skill, the harmonized test requires the author's own intellectual creation, and there are many commentators who believe that cannot be satisfied by an AI. So although UK law has rules for who would own a copyright work generated by an AI, because of the new test of originality that copyright may never arise, by definition, for someone to own.
So that leaves trade secrets as the de facto most likely protection in Europe for AI, because they are clearly broad enough to protect information including algorithms and data and, indeed, the largest AI dispute to date that I'm aware of, the Waymo v. Uber case in the States, ended up focusing on trade secrets. In the UK, and in the EU, by definition a trade secret has to have been subject to reasonable steps under the circumstances, by the person lawfully in control of the information, to keep it secret. So there will be a requirement, if you come to court to defend your trade secret, to actually prove that you've taken reasonable steps to keep it secret. In practice that means not only should you take steps, but you should take such steps as you can easily log, record, and turn up to court and prove. That includes practical measures to keep information secret, such as restricting physical access to the data; electronic means to restrict access to the data, so who has permission to see it, password protections, access logs; and then, again, measures to restrict or to monitor use of that data, so monitoring for transfers out of the company by email or onto cloud storage or memory sticks. All such practical measures should be backed up with staff training and staff handbooks and the like and, of course, legal measures as well: non-disclosure agreements where you're dealing with employees or contractors or third parties, terms of employment to keep trade secrets protected, and those should extend beyond the term of employment, and, of course, enforcing your policies internally.
Now where you have fallen outside that definition because you can't prove you've taken such measures, in the UK hope may still lie in the common law of confidential information, which doesn't have as rigorous a definition. So it may still be possible to bring an action, in the UK in particular. Another point of note in the UK is that while some EU countries are still getting their heads around trade secrets, whereas some have always had them, the UK courts have a long track record of being able to hear disputes over secrets in camera, where the public does not learn the secret. I think some European jurisdictions are having to catch up. It's a requirement under the trade secrets regime that that happens, but the procedural mechanisms may not be as developed as they are in the UK.
So then I should also talk about contractual protection, because that's the other de facto way to protect your AI assets when traditional IP rights fail you. The first point is obviously that you should have obligations in contracts to protect trade secrets, by requiring people to pursue the same practical measures that you are pursuing. In the UK it is legal, as between the parties, to treat assets which aren't in fact eligible for IP as if they did attract IP. IP rules should also be clarified in the contract as between the parties: both the position on ownership of actual IP and, indeed, the control of things which aren't IP. So really you want to avoid reliance on the default rules in statute. Returning to the UK's computer-generated works legislation, there is a default rule of ownership, which is that a computer-generated copyright work belongs to the person by whom the arrangements necessary for the creation of the work are undertaken. That's obviously open to interpretation. There's also a total dearth of case law, so you'd want to clarify that in your contract rather than rely on applying the case law. If you think of an example as complicated as a self-driving car, you may have rivals for the ownership and/or control of information and technology, including the OEM, the equipment manufacturer, the dealer, the after-sales supplier, the first owner, subsequent owners, drivers and passengers, so a web of contracts would be ideal to clarify who has control of the valuable data arising in those circumstances.
I'm going to end by talking about the ideal, which is a holistic strategy for IP. I think this applies particularly to the choice between patents, a monopoly right where you are protected regardless of disclosure (indeed, it's part of the social contract that to get your patent you actually disclose your invention publicly), and trade secrets, which obviously rely on being able to keep your information secret. So that is often a case of assessing the risk of disclosure. Now it is, I think, common in any patent attorney's practice that when someone comes to them with a potential patent they ask whether they'd rather keep it secret. But I think the question is particularly acute when it comes to AI technology, and it is also subject to many, many moving pieces. So I'm just going to outline some of the considerations.
I think you need a holistic view of the technology, the market and developing law. So first of all, some machine learning based applications, some of the inputs and some of the outputs, are more or less vulnerable to reverse engineering. The example I've given here is a complex product on the left, a self-driving car, where to reverse engineer it and copy it may be far more challenging than for a simple molecule, which I illustrated on the right. So in the life sciences sector, if you are launching a small molecule into the marketplace, you need some sort of monopoly right in order to stop people immediately cutting into your market, and that would be a patent, ideally, or at least the quasi-monopoly right of data exclusivity, which can give life sciences companies up to 10 years of protection because no one can get marketing authorization on the back of their data within that time. However, I think what has emerged over the last few years is that reverse engineering can in fact be applied to more complex embodiments of AI, to extract training data and the like. An example I hear about is the camera on a self-driving car that enables the car to identify the pedestrian and the cyclist and the like, by taking a video stream and producing labels. Now, the risk is that if you run that long enough you end up with a whole set of training data on which you can then train your new AI to do the same task. I am told that is a real-world problem for some of these manufacturers, and that they rely on contractual exclusions providing that customers cannot reverse engineer a data set in this way. But again, concentrating on Europe, I would issue a warning. The software directive in Europe, which is also still in force in the UK, voids a contractual term that prevents reverse engineering through the normal use of the software. So I think care would need to be taken in the drafting of such an exclusion for a product sold for use in Europe.
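The extraction risk described above can be illustrated with a toy sketch: an attacker treats the deployed model as a black box, harvests its labels to build a training set, then trains a substitute. Everything here is illustrative and assumed, not from the talk; real attacks target neural networks with far richer inputs, but the mechanism is the same.

```python
# Toy sketch of "model extraction": harvesting a deployed model's
# labels to train a substitute. All names are hypothetical.

def deployed_classifier(speed_kmh: float) -> str:
    """Stand-in for a proprietary black-box model the attacker can query."""
    return "cyclist" if speed_kmh > 12.0 else "pedestrian"

def harvest_training_set(query_points):
    # The attacker observes only inputs and the model's output labels.
    return [(x, deployed_classifier(x)) for x in query_points]

def train_substitute(dataset):
    # Trivial 1-nearest-neighbour "model" trained on the harvested labels.
    def substitute(x: float) -> str:
        nearest = min(dataset, key=lambda pair: abs(pair[0] - x))
        return nearest[1]
    return substitute

# Query the deployed model densely, then clone its behaviour.
harvested = harvest_training_set([i / 2 for i in range(0, 61)])  # 0.0 .. 30.0
clone = train_substitute(harvested)
probes = [3.0, 11.9, 12.4, 25.0]
agreement = sum(clone(p) == deployed_classifier(p) for p in probes)
print(f"substitute agrees with deployed model on {agreement}/{len(probes)} probes")
```

Run long enough against a real product, the harvested (input, label) pairs become exactly the kind of training set the contractual exclusions mentioned above try to prohibit.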
Now, other disclosure risks which are unusual to AI. First of all, AI and machine learning specialists are relatively likely to move between companies. They are, if I can generalize, smart, techy, maybe millennial; they love a start-up, and demand outstrips supply. So they can move if they wish. Leavers, even under trade secrets law, are going to be free to use experience and skills gained at their employer, so you need to be quite clear in your contracts to identify trade secrets as such, and not merely experience and skills, to try to make sure they can't be used elsewhere. Also, many of our clients need to attract AI specialists with an environment where they're allowed to publish their findings, either in journals or, if suitable, as a patent; because they do want to publicize what they do, there is an inherent disclosure risk there. Also, at least in Europe, AI and machine learning projects typically involve collaborations between maybe four or five parties, so you'd have a data supplier, the customer, the platform supplier, the data scientists and the like. So you've got an increased risk of disclosure inherently, because of the number of players, but also where you're having to tap into expertise in academia: academics absolutely want to publish their results, so you may need to seek patent protection before they do so because of that disclosure. Obviously you want to have contracts with your collaborators to cut down on these disclosures, but they are real.
The next real risk here is regulation, and I think it's fair to say that the EU is keener on regulating AI than the US, or is at least further along that path. There are many signs that this will end up introducing requirements for disclosure of algorithms or data sets, whether to prove safety, robustness or fairness. In Europe there is already, within the GDPR, Article 22, which relates to the need to tell consumers where a significant decision about them has been made automatically, and potentially to explain how that algorithm has made its decision. And the UK's Information Commissioner's Office, which is responsible for privacy in the UK, has issued not one, not two, but three volumes of advice on how your AI needs to be explained to consumers.
Finally, there's a lot of talk in Europe about competition authorities who may wish to enforce the sharing of data in order to promote competition. There are, I'm afraid to say, frequent references to eroding the incumbent position of what is called big tech, and occasionally mentions of specific US big tech companies in that regard. So on that cheery note I will stop and leave it open for some questions.
Neil: Thanks very much, Matt. That was fascinating. Just opening up the Q&A chat. We've had two while you were speaking so we can start with those and hopefully some more will come through. The first question: are there moves to change IP laws in the UK and the EU?
Matt: Absolutely. So I would say there are at least consultations going on. Among the big international bodies, WIPO has had its third conversation on AI already, and the EPO has held a number of conferences, particularly trying to get information from the users of its services, essentially, on what they need. The UK IPO completed a consultation, the round of gathering evidence, in November. Now, the European Parliament has published some references to considering some sort of new rights specific to AI, but has yet to sketch out what those might be. It even flirted a couple of years ago with the possibility of giving AI some sort of legal rights, which of course might solve the problem of patents, because then the AI could own the right to the patent application. There's really no word yet from the UK legislature about what it might do about AI, but that's really because, I think, the UK IPO's consultation is still ongoing.
Neil: Thank you. The second question: are regulators conscious of the disclosure risks they present to company trade secrets?
Matt: Certainly IP practitioners are well aware of that, and it has come up, certainly in the WIPO consultation, that this is a threat, fundamentally, to trade secrets if regulators want to know more about your algorithm or your data. I think the regulators themselves are currently focused on safety and the fundamentals and haven't really addressed that point in particular. I would say, as a general point, at least in Europe, some regulators have been quite transparent about the fact that they just don't have the expertise in-house to develop their regulation, so I think getting to those finer points is rather beyond their scope at the moment. The only other thing I have seen is that some of the ... of Europe, which specialize in their sectors, have expressly mentioned this to the European Parliament as a risk. So watch this space; we'll see what happens.
Neil: Will do. Interesting. Let me give it a minute to see if any other questions come through. While we're waiting, thank you very much for the presentation on what is a very exciting and, to say the least, quickly developing area of law. It will surely keep you on your toes for many editions of your book to come. I don't think any further questions are coming through, so I think we can bring the presentation to a close at that point. Thank you very much for joining, everybody. Matt, any closing remarks?
Matt: I would have loved to have heard from the members about their experiences, because Silicon Valley is obviously one of the key locales for AI development. Questions or comments would be welcome. I can see something's flashed up. What is that?
Neil: We've had a compliment. Great presentation.
Matt: Okay. Thank you. It was my pleasure.
Matt Hervey, Gowling WLG's UK Head of Artificial Intelligence and co-editor of The Law of Artificial Intelligence (Sweet & Maxwell), will talk about IP law and strategy for AI in the context of evolving commercial, technical, legal and regulatory environments.
THIS DOES NOT CONSTITUTE LEGAL ADVICE. The information presented on this website, in any form, is provided for informational purposes only. It does not constitute legal advice and should not be interpreted as such. No user should take, or fail to take, a decision in reliance solely on this information, nor disregard the legal advice of a professional or delay consulting a professional on the basis of something read on this website. Gowling WLG professionals would be happy to discuss the available options with users regarding specific legal questions.