For those interested in some lockdown listening on Artificial Intelligence (AI), we recommend an interview with Stuart Russell broadcast on the UK's BBC Radio 4.
Russell explains the critical differences between current AI and "artificial general intelligence". Whereas current AI can only achieve narrow tasks, artificial general intelligence might tackle any problem. Unfortunately, it might alight on single-minded, damaging solutions - or, at least, solutions that humans would wish to avoid. Russell gives the example of removing carbon dioxide from the atmosphere by turning the oceans to acid, and invokes the fable of King Midas' single-minded and disastrous desire for gold. He suggests that these risks should be limited in the engineering of AI itself, by making artificial general intelligence deferential to the views of humans.
Of particular interest to lawyers, Russell discusses the roles of regulation and standards. On the premise that technology companies are "allergic" to regulation, he suggests that standards would be a more welcome and effective solution. By analogy, he suggests, bridges do not fall down because engineering standards prevent it, not because the law requires it.
From our experience of technical standards, we think the dichotomy between standards and regulation may not hold up.
- Companies need a reason to adopt standards instead of pursuing their own solutions. Commonly, standards are agreed by companies to ensure interoperability of equipment and software; for example, companies voluntarily comply with telecommunications standards such as 3G, 4G and 5G so that a mobile phone made by one manufacturer will communicate with a base station made by another. Without such a commercial imperative, we may need to fall back on legal requirements - either a direct legal requirement to comply with a standard or the influence of indirect risks (of product liability, tort, etc.) which would be mitigated by adopting an industry standard.
- Many companies welcome regulation to "level the playing field". This is particularly true where there is conflict between the common good and commercial aims. For example, a company's board may wish to adopt costly environmental measures, but these would risk making the company uncompetitive - unless regulation requires its competitors to do the same. The desire for levelling regulation can be seen in the recent joint consultations by the Law Commission of England and Wales and the Scottish Law Commission on autonomous vehicles. The summary of the consultation reported that 95% of respondents favoured a national scheme of basic safety standards for autonomous public transport services to "ensure a 'level playing field' for developers" - indeed, some respondents highlighted the benefits of an international scheme. Where some degree of regulation is inevitable for a new technology, such as autonomous vehicles, it is best that the regulatory landscape is sketched out early, to avoid companies developing their technology at risk. No company wishes to invest in developing and launching a product only for a regulator to later prescribe rules which either render it illegal or necessitate expensive changes, particularly where a competitor might be closer to the regulatory requirements and gain a competitive windfall.
- A legal regulatory framework set and policed by an independent regulator may be an important element in building public trust. Companies may benefit from being able to point to a regulatory framework both as a selling point and as a defence when things go wrong. Trust is predicted to be critical to many AI applications, such as autonomous vehicles and AI in healthcare. The European Commission has published ethics guidelines for "trustworthy" AI, and essential ingredients of trust (transparency, explainability, accountability, etc.) will be required by numerous regulatory frameworks being developed around the world.
- Russell notes that he does not have a solution to the deliberate misuse of artificial general intelligence. To that, we can add accidental misuse. Again, legal solutions (though imperfect) are the foundation for enforcing proper use.
In any case, work is ongoing on regulation in many countries, and some of these initiatives expressly consider the risks to humanity. Draft text prepared for the European Parliament on autonomous robots recited the risk to the human species presented by AI: "whereas ultimately there is a possibility that within the space of a few decades AI could surpass human intellectual capacity in a manner which, if not prepared for, could pose a challenge to humanity's capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny and to ensure the survival of the species". As passed, the resolution only recited that "AI could surpass human intellectual capacity". It did, however, keep a proposal to include kill switches in AI: it annexed a licence for designers of autonomous robots that would require designers, for example, to "integrate obvious opt-out mechanisms (kill switches) that should be consistent with reasonable design objectives". The real challenge for regulation is to be fast and flexible enough to ensure safety while not stifling innovation: hence the importance of industry consultations.
The potential for technical standards relating to AI is particularly interesting from the perspective of intellectual property. We have seen some work on standardising definitions and levels of AI, especially for automotive applications, and some discussion of standardising the use of sensors for autonomous vehicles. The adoption of standards more broadly in AI may see a role for "standard essential patents". These have been a significant commercial aspect of previous technical standards, such as in telecommunications and audio, image and video compression, and are a particular expertise of our Intellectual Property (IP) team.