Alexandra Brodie
Partner
Co-Chair of Global Tech
The National Cyber Security Centre (NCSC) has announced new international guidelines for secure Artificial Intelligence (AI) system development. These provide a framework for developers of any systems using machine learning AI to help them make informed cyber security decisions at each stage of the development process.
Breaking new ground for international collaboration in this area, the guidelines are published jointly by the NCSC, the US Cybersecurity and Infrastructure Security Agency (CISA) and 21 other international agencies. The 20-page document was also written in cooperation with 19 organisations, including Google, Amazon, OpenAI, Microsoft and the Alan Turing Institute.
The guidelines are aimed at providers of AI systems, whether created from scratch or built on top of tools and services provided by others. However, the NCSC urges all stakeholders, including data scientists, developers, managers, decision-makers and risk owners, to read and use the guidelines.
The document notes that when the pace of development is high, security can often be a secondary consideration. It insists, however, that for machine learning AI, security must be a core requirement throughout the lifecycle of the system. This is because, on top of existing cyber security threats, AI systems are subject to new types of vulnerability: 'adversarial machine learning' (AML), for example, exploits fundamental vulnerabilities in machine learning components, including hardware, software, workflows and supply chains.
The guidelines are structured around four key areas within the AI system development process: (1) secure design, (2) secure development, (3) secure deployment, and (4) secure operation and maintenance. For each area, the guidelines suggest considerations and mitigations to reduce the risks, such as only releasing models, applications or systems after subjecting them to appropriate and effective security evaluation.
The new guidelines must be considered in tandem with established cyber security, risk management and incident response best practice. In particular, providers must continue to follow the 'secure by design' principles developed by the NCSC. For more on this, see our Data Unlocked: Data protection and cyber security article.
As with established 'secure by design' principles, these new global guidelines require developers to invest in prioritising the features, mechanisms and tools that protect customers at each layer of the system design and across all stages of the development lifecycle. Doing so avoids the expense of later re-design and safeguards customers and their data from the outset.
The recent Bletchley Declaration by the countries attending the AI Safety Summit acknowledged that many risks arising from AI are inherently international in nature and so are best addressed through international cooperation. The publication of the guidelines so soon after the summit is a significant step forward in cyber risk awareness and mitigation strategies for AI system development. AI providers should familiarise themselves with the guidelines and implement them as appropriate. However, the guidelines are just that: guidance, and not in themselves binding. Therefore, whilst compliance is clearly still important from a reputational perspective, AI providers will not face liability for breaching the guidelines unless they have given, or go on to give, contractual commitments to customers to comply with them.
A number of countries, as well as the EU, are also separately creating new laws in relation to AI, with progress at varying stages. Providers of AI systems will therefore still face a patchwork of potentially applicable regulation, laws and guidance.
Our Tech team, which includes cyber, AI, IP, regulatory, data and privacy specialists, is here to help you navigate this ever-changing web of considerations. Please get in touch with Amber Strickland, Jocelyn Paulley, Matt Hervey, Alex Brodie or Kieran Laird if you would like to find out more.
NOT LEGAL ADVICE. Information made available on this website in any form is for information purposes only. It is not, and should not be taken as, legal advice. You should not rely on, or take or fail to take any action based upon this information. Never disregard professional legal advice or delay in seeking legal advice because of something you have read on this website. Gowling WLG professionals will be pleased to discuss resolutions to specific legal concerns you may have.