AI, machine learning, deep learning, neural networks - call it what you like, there's a lot of excitement about the ability of software to analyse large volumes of data, spot patterns, learn (sometimes independently), draw conclusions and produce insights that are entirely new.

This level of excitement and chatter is not unusual in the world of new technologies. What is new, though, is the political and legislative interest in this technology, which sets it apart and underlines its potential to re-shape our world.

The need for more, and better, technology in healthcare has been on the government's agenda for the last 37 years, since the 1982 report of the Koerner Steering Group. Health and Social Care Secretary Matt Hancock has brought renewed focus with his policy paper 'The Future of Healthcare', which aims to enable access to real-time data and to set standards for security and interoperability. My observation is that the technology needed in healthcare is not the clever, cutting-edge stuff. It's the basic ability to access a patient's record electronically, send emails rather than faxes, book appointments, automate rotas and surgery schedules, and locate equipment digitally within a hospital. Investing in the basics would bring efficiencies, savings and a better patient experience. However, there is also a place for the cutting edge of technology, which is capable of transformative change rather than incremental efficiency.

Following the demise of the National Programme for IT, the NHS (or rather its many constituent parts) has been very cautious about any kind of technological solution. Once burnt, twice shy. So there is already a huge hurdle to clear in reassuring healthcare professionals that any new take-up of software and solutions will not repeat their previous experience. That challenge doubles once you factor in technology that takes on some of the healthcare professional's own work. Not because doctors fear being replaced, but because of a simple question: how do they know whether they can trust it?

Trust and politics

That word, trust, sums up the most significant barrier to the adoption of AI in a healthcare setting. Patients do not know whether they can trust new software to give a diagnosis, monitor their condition or interpret scans when no-one can explain in lay terms how it works. Doctors do not know which of the many applications out there have been properly coded and calibrated, built with physician input and based on accurate data.

Both regulators and politicians are acutely aware of the trust issue.

The UK's Industrial Strategy names AI and data as one of the four 'Grand Challenges' which will transform our future and on which the government wants the UK to lead globally. The House of Lords appointed a Select Committee on Artificial Intelligence to investigate whether the UK was ready to adopt AI technologies and to identify any barriers. One of the comments in its report 'AI in the UK: ready, willing and able?', published in April 2018, is:

"We believe that the development of intelligible AI systems is a fundamental necessity if AI is to become an integral and trusted tool in our society. Whether this takes the form of technical transparency, explainability, or indeed both, will depend on context and the stakes involved but in both cases we believe explainability will be a more useful approach for the citizen and the consumers."

They also recommended that the government create a new Centre for Data Ethics and Innovation, a plan already in place as part of the Sector Deal for AI in the Industrial Strategy. This body is to act as a custodian of data, including personal data, making it more accessible to organisations within a set of criteria and parameters so that access and use are ethical. The new Centre is due to open in Spring 2019 and will be a key factor in achieving transparency and in educating the public about how data is accessed, shared and used.

Trust and data protection

Europe's new General Data Protection Regulation 2016 ("GDPR") already has transparency and accountability at its core. The regulators identified that if people do not trust organisations with their data, they will not want to share it or allow access to it. This stifles innovation and prevents organisations from delivering better, more tailored services - such as personalised medicines. GDPR aims to increase transparency and so foster trust, so that data is properly protected and handled, through a range of measures:

  • More detailed and more prominent privacy notices, in which organisations have to provide more information than before about what data they collect, why, and how it is processed. This presents an obvious challenge for the providers of AI systems: how do you explain in layman's terms, and concisely, how a very complex piece of software works? How do you let people know how their data may be used when the AI could produce insights that were not the ones expected? How do you tell people what data you hold about them when the AI could produce new data?
  • More rights for individuals to control how organisations use their data, such as the rights to have the processing of data restricted, to have data deleted and to have it ported to another provider. These accompany the existing rights of access to data, correction of data and withdrawal of consent to processing.
  • Data protection impact assessments. These are risk assessment tools which are now mandatory where an organisation will use new technology in a way that is likely to result in a high risk to the rights and freedoms of individuals, particularly where it profiles individuals or handles health-related data at scale. They require the organisation to analyse the proposed technology and determine how it will be made GDPR-compliant.
  • The principles of data protection by design and by default. These require organisations (and suppliers of systems and software) to ensure that privacy is protected as the default in any system and that compliance with the legislation is designed in. This is to avoid retrofitting privacy measures into workflows and software, and to make privacy a design requirement rather than an afterthought (see the sketch after this list).
  • Physical and technical security measures. Security has always been a core data protection principle: measures must be appropriate to the sensitivity of the data and to the harm that could be caused through misuse.
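As a loose illustration of the 'by design and by default' idea, here is a minimal sketch in Python. Everything in it - the purposes, field names and pseudonymisation scheme - is invented for this example rather than drawn from any real system or regulatory guidance; the point is only to show data minimisation and pseudonymisation working as default behaviour rather than as an add-on.

```python
import hashlib
from typing import Dict, Set

# Fields permitted for each stated purpose. Collecting anything beyond
# these requires a deliberate design change, not a default behaviour.
# (The purposes and field names here are illustrative assumptions.)
ALLOWED_FIELDS: Dict[str, Set[str]] = {
    "appointment_booking": {"patient_ref", "clinic", "slot"},
    "scan_analysis": {"patient_ref", "scan_id"},
}

def pseudonymise(identifier: str, salt: str) -> str:
    """Replace a direct identifier (e.g. an NHS number) with a keyed hash,
    so the value stored by default is not directly identifying."""
    return hashlib.sha256((salt + identifier).encode("utf-8")).hexdigest()[:16]

def collect(purpose: str, submitted: Dict[str, str]) -> Dict[str, str]:
    """Data minimisation by default: keep only the fields needed for the
    stated purpose, and refuse purposes that were never designed in."""
    if purpose not in ALLOWED_FIELDS:
        raise ValueError(f"No recorded basis for purpose: {purpose}")
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in submitted.items() if k in allowed}

# Example: extra fields supplied by a form are dropped, not stored.
record = collect(
    "appointment_booking",
    {
        "patient_ref": pseudonymise("943-476-5919", salt="per-deployment-secret"),
        "clinic": "cardiology",
        "slot": "2019-04-03T09:30",
        "occupation": "teacher",  # not needed for booking, so discarded
    },
)
```

The structural point is that adding a new field or a new purpose requires an explicit design decision, so privacy protection is the starting position rather than something retrofitted later.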

GDPR also anticipates a particular type of processing which much AI will carry out, known as "profiling": the automated processing of data to evaluate, analyse or predict aspects of an individual, including their health. Some AI in a healthcare context will perform exactly this kind of analysis, weighing a collection of factors about a patient to assist with diagnosis. Profiling is permitted, provided that it is properly explained in privacy notices, including the 'logic' involved and the significance and consequences of the profiling. Clearly, then, AI in a black box will not comply with GDPR, as well as appearing unfriendly to healthcare professionals who want to be able to follow the machine's logic and check the result it has provided.

However, if as a result of profiling an automated decision is made which has significant effects on an individual - which would certainly include a decision on a diagnosis or on whether or not to treat them - then the profiling can only be carried out if the individual has consented, the processing is necessary for a contract between the patient and the organisation, or it is authorised by law. Organisations therefore need to analyse carefully whether their use of AI amounts to 'profiling' and whether an 'automatic' decision is made as a result. Where a human reviews the results of an AI's analysis or recommendation, the decision is not 'automatic', but it will of course be important for the human to be able to decide whether or not they agree with the AI.
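To make that distinction concrete, the sketch below (in Python, with invented names; it illustrates the principle, not any real clinical system) shows a workflow in which the AI only ever produces a recommendation, accompanied by its logic, and a decision comes into existence only once a named clinician has reviewed it and recorded their own outcome.

```python
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    TREAT = "treat"
    DO_NOT_TREAT = "do not treat"

@dataclass(frozen=True)
class AIRecommendation:
    """All the software is allowed to produce: a recommendation,
    with a plain-language account of the 'logic' involved."""
    outcome: Outcome
    confidence: float
    rationale: str

@dataclass(frozen=True)
class ClinicalDecision:
    """A decision with significant effects only exists once a human
    has reviewed the AI's output; the model cannot create one itself."""
    outcome: Outcome
    decided_by: str          # the named clinician, never the model
    agreed_with_ai: bool
    ai_rationale: str        # retained so the reasoning can be audited

def human_review(rec: AIRecommendation, clinician: str,
                 clinician_outcome: Outcome) -> ClinicalDecision:
    """The only route from recommendation to decision: the clinician
    supplies their own outcome, which may agree or disagree with the AI."""
    return ClinicalDecision(
        outcome=clinician_outcome,
        decided_by=clinician,
        agreed_with_ai=(clinician_outcome == rec.outcome),
        ai_rationale=rec.rationale,
    )

# Example: the clinician overrides a low-confidence recommendation.
rec = AIRecommendation(Outcome.TREAT, confidence=0.58,
                       rationale="Pattern in scan resembles early-stage lesion.")
decision = human_review(rec, clinician="Dr A. Example",
                        clinician_outcome=Outcome.DO_NOT_TREAT)
```

The design choice worth noting is that the review step is made meaningful: the clinician must record their own outcome rather than rubber-stamp the model's, which is precisely what GDPR expects if the decision is not to count as 'automatic'.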

Regulators rarely want to stand in the way of innovation and progress, especially when there are tangible benefits to individual citizens and to the economy. Their approach to data protection is to create a framework of principles which organisations must apply to their particular technology, circumstances and risks. GDPR is designed to accommodate new technologies such as AI, but it is only part of the broader regime that needs to be developed to ensure that AI is used ethically and, more fundamentally, is trusted by all who use it.