Data Protection and Artificial Intelligence in the UK

15 July 2019

The use of robotics and Artificial Intelligence (AI) has been under discussion in the European Parliament and the European Commission for the last four years, and forms part of the Digital Single Market Strategy. One outcome of this work has been the creation of a high-level group of experts, whose functions include advising the Commission on the ethics involved in the use of AI systems.

The General Data Protection Regulation (GDPR) requirements and the path to ethical AI

In April 2019, the European Commission released its Communication "Building Trust in Human-Centric Artificial Intelligence". In this Communication, the Commission made clear that AI should be a tool that serves people and increases human well-being, an aim which requires ensuring the trustworthiness of AI and its alignment with EU values and human rights.

AI brings new challenges, since machines are able to learn and make automated decisions. There is a risk that some decisions are based on unreliable data, causing harm or problematic outcomes. This is a concern given the increasing use of AI in goods and services that people rely on daily, including smartphones, online applications and automated cars. The European Commission has therefore stressed the importance of ensuring that applications integrating AI components are not only compliant with the law, but are also ethically sound.

The High-Level Expert Group on Artificial Intelligence set up by the European Commission ("AI HLEG": a group of 52 experts from academia, civil society and industry appointed by the Commission in 2018) also published its "Ethics Guidelines for Trustworthy AI" in April 2019 (the Guidelines), following the release of a draft in December 2018 on which more than 500 comments were received. This is, again, part of the AI strategy adopted by the Commission.

The Guidelines aim to promote Trustworthy AI, which has three components: it should be lawful; ethical; and robust. The Guidelines focus on the latter two components and set out a list of fundamental rights, ethical principles, requirements and assessments that should be applied to AI systems.

Fundamental rights

According to the Guidelines, the fundamental rights that should be considered in all cases when developing, deploying and testing AI systems are:

  • respect for human dignity, to avoid treating humans as objects that are manipulated or conditioned;
  • freedom of the individual, so that individuals are able to take decisions by themselves;
  • respect for democracy, justice and the rule of law, to ensure that AI systems do not operate in a way that destabilises democratic processes;
  • equality, non-discrimination and solidarity, to mitigate the risk of applications which use AI components taking actions that lead to unfair outcomes; and
  • safeguarding citizens' rights.

Ethical principles and the seven requirements

The AI HLEG identifies four ethical principles, which it treats as "ethical imperatives" that AI developers should observe because they are grounded in the fundamental rights most likely to be affected by the use of AI tools, namely:

  • respect for human autonomy;
  • prevention of harm;
  • fairness; and
  • explicability.

These principles in turn underpin seven (non-exhaustive) requirements that AI practitioners should meet by carrying out the assessments set out in the Guidelines, and by re-evaluating them regularly throughout the AI system's life cycle. The requirements are:

  • Human agency and oversight: AI systems should (i) respect humans' fundamental rights (meaning that developers should carry out fundamental rights impact assessments), and (ii) allow humans to make informed decisions when interacting with the AI system and guarantee a reasonable level of human control over the application. From a data protection point of view, this requirement reinforces data subjects' right not to be subject to a decision based solely on automated processing (including profiling) where the decision produces legal effects or similarly significantly affects the data subject, unless an exemption applies (Article 22 of the GDPR).
  • Technical robustness and safety: AI developers should ensure the resilience and security of the systems deployed. Where personal data is processed, this is a mandatory requirement placed on both data controllers and data processors under Article 32 of the GDPR. The aim is to ensure that unintentional harm is avoided, or that the risk of it is minimised through regular risk assessments. The AI HLEG also recommends methods such as the evaluation and verification of behavioural patterns, the implementation of fall-back plans, and assessment of the accuracy of the data and the reliability of the actions taken by the AI system.
  • Privacy and data governance: Going beyond the general obligations set out in data protection and privacy laws (e.g. the Article 25 GDPR obligation of data protection by design and by default, and the data protection principles set out in Article 5 of the GDPR), AI developers should put in place mechanisms to ensure the quality and integrity of data and legitimate access to it.
  • Transparency: Transparency is crucial in a trustworthy AI environment, and it represents one of the major challenges for developers because of the margin of uncertainty over the behaviour of the AI system, which might create new personal data without human intervention and, to some extent, without humans' knowledge. Traceability mechanisms are essential to achieving transparency, so that AI systems and their decisions can be explained in a manner compliant with Articles 13 to 15 of the GDPR, by providing regular and meaningful information about the logic involved and the consequences for the humans using the AI system.
  • Diversity, non-discrimination and fairness: To avoid discrimination, AI practitioners should establish a strategy to understand what fairness means for the AI system in question, and ensure that unfair biases are flagged and avoided. As regards diversity, AI systems should be accessible to all, regardless of any disability, and should involve relevant stakeholders throughout their entire life cycle.
  • Societal and environmental well-being: To meet this requirement, AI systems should be sustainable and environmentally friendly, and ensure a positive social impact on humans directly interacting with the AI system and on any other indirectly affected stakeholders.
  • Accountability: This requirement is essential to compliance with the data protection principles and becomes even more relevant when AI systems use personal data. It implies the implementation of mechanisms such as auditing the system's processes and outcomes, oversight of the ethical principles applied, documentation of updates, evaluations and any decisions taken by organisations, and mechanisms allowing redress where any harm or adverse impact is caused.

Supplemental legislation in the UK

In the UK, section 14 of the Data Protection Act 2018 (Chapter 2, Part 2 of the Act) supplements the GDPR Article 22 limitations on the use of automated processing and profiling which produces legal effects concerning individuals, or which similarly significantly affects them.

Article 22 of the GDPR states that such processing significantly affecting individuals will not take place unless:

  • the individual affected gives explicit consent;
  • it is necessary to enter into or perform a contract between the individual and a data controller; or
  • it is authorised by law (in this case, the Data Protection Act 2018) which lays down suitable safeguards.

These limitations are stricter where special categories of data are involved in automated processing that significantly affects the individual: such processing is only allowed where the person concerned gives explicit consent, or where it is necessary for reasons of substantial public interest on the basis of EU or Member State law (Article 22(4) of the GDPR).

If a data controller in the UK concludes that it has a legal ground to carry out automated processing or profiling on one of the bases set out above, then, under the Data Protection Act 2018, it must implement the additional measures set out in section 14 of the Act, namely:

  • notifying the individual in writing that a decision has been taken based solely on automated processing; and
  • putting in place an internal policy to deal with individuals' requests to have the decision reconsidered, or to have human intervention in the automated decision. Under the Data Protection Act 2018, the data subject may make such a request within one month of receiving the data controller's notification, and the data controller must respond in accordance with the timescales and rules set out in Article 12(3) of the GDPR. The response must be in writing and describe the steps taken to comply with the request, as well as the outcome.
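
For practitioners building compliance tooling around these safeguards, the following minimal Python sketch models the section 14 workflow described above. It is purely illustrative: the class and method names are our own, the statutory "one month" periods are approximated as 30 days for simplicity, and nothing in it is prescribed by the Act or the GDPR.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum, auto
from typing import Optional

class LawfulBasis(Enum):
    """Grounds on which Article 22(2) GDPR permits a solely automated
    decision producing legal or similarly significant effects."""
    EXPLICIT_CONSENT = auto()
    CONTRACT_NECESSITY = auto()
    AUTHORISED_BY_LAW = auto()  # e.g. section 14 of the Data Protection Act 2018

ONE_MONTH = timedelta(days=30)  # illustrative simplification of "one month"

@dataclass
class QualifyingDecision:
    data_subject: str
    basis: Optional[LawfulBasis]        # None => no lawful basis identified
    notified_on: Optional[date] = None  # date of the written notice

    def is_permitted(self) -> bool:
        # Article 22(2): proceed only on one of the listed bases.
        return self.basis is not None

    def notify(self, today: date) -> None:
        # Section 14: notify the individual in writing that a decision has
        # been taken based solely on automated processing.
        if not self.is_permitted():
            raise ValueError("No Article 22 lawful basis: do not proceed.")
        self.notified_on = today

    def may_request_review(self, today: date) -> bool:
        # The data subject has one month from the notification to request
        # reconsideration or human intervention.
        return self.notified_on is not None and today <= self.notified_on + ONE_MONTH

    def response_deadline(self, request_received: date) -> date:
        # Article 12(3) GDPR: respond without undue delay, and in any event
        # within one month of receipt of the request.
        return request_received + ONE_MONTH

# Example: a consent-based decision notified on 15 July 2019; a request made
# on 1 August 2019 is in time, and a response is due within the month.
decision = QualifyingDecision("data subject A", LawfulBasis.EXPLICIT_CONSENT)
decision.notify(date(2019, 7, 15))
assert decision.may_request_review(date(2019, 8, 1))
print(decision.response_deadline(date(2019, 8, 1)))  # 2019-08-31
```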

These additional safeguards and obligations are in line with the European ethical principles and requirements mentioned above.

The Information Commissioner's Office (ICO) approach to AI and its regulatory "Sandbox" (beta phase)

In the UK, the Information Commissioner has taken a similar approach, and AI is on her list of priorities.

One consequence of this approach was the 2017 update of the "Big data, artificial intelligence, machine learning and data protection" guidance (in view of the GDPR and the UK Data Protection Act 2018 coming into force). In that document, the ICO stressed the importance of ensuring fair, accurate and non-discriminatory use of personal data, and set out rules to ensure an ethical approach (an approach later echoed by the European Commission, as mentioned above).

The guidance is a useful tool because the ICO sets out its views on how to comply with the data protection principles of fairness and lawful processing, purpose limitation, data minimisation and retention, accuracy, and integrity and confidentiality. It also provides relevant input on how to inform individuals of unforeseen purposes, anonymise data and ensure privacy by design, and includes checklists that help organisations carry out project-focused data protection impact assessments.

A further consequence of the ICO's interest in innovative tools is its regulatory Sandbox, a service supporting organisations that are using personal data to develop innovative products. A considerable number of AI practitioners are therefore expected to join the Sandbox, although initially only around ten organisations are expected to be admitted to the beta phase.

The Sandbox is currently in its beta phase, in which participants, supported by the ICO's officers, will assess how they use personal data and the steps to follow in order to ensure compliance with data protection legislation.

This text first appeared in the UK chapter of Global Legal Insights - AI, Machine Learning & Big Data 2019, published by Global Legal Group, Ltd.
