
News reaction: UK Government publishes AI Whitepaper

This week the UK Government published its first Artificial Intelligence (AI) whitepaper, which sets out an approach to regulating AI intended to drive responsible innovation and maintain public trust through five key principles.

In many ways the UK is already a leader in AI, but there is work to be done to further secure the vast benefits of AI while also addressing the range of challenges and risks it presents.

As part of its National AI Strategy, the UK Government committed to establishing a pro-innovation approach to regulating AI. The National AI Strategy policy paper, published in September 2021, presented emerging proposals for regulating AI and was the driving force behind the latest AI whitepaper.

The five key principles that existing regulators, such as the Health and Safety Executive and the Competition and Markets Authority, are being asked to consider in order to facilitate the safe and innovative use of AI in the industries they monitor are:

  • safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed
  • transparency and explainability: organisations developing and deploying AI should be able to communicate when and how it is used, and explain a system’s decision-making process at a level of detail appropriate to the risks posed by the use of AI
  • fairness: AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes
  • accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes
  • contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI

The Government is now consulting on its new regulatory framework. In the meantime, regulators will prepare for the new guidelines by working with organisations over the next 12 months on how to implement these five key principles in their sectors, with the potential for further legislation to ensure regulators apply the principles consistently.

Chris Anley, Chief Scientist at NCC Group, said: “We welcome the focus on a principles-based approach to regulation - one that balances innovation with the need to build safe, secure and trustworthy AI systems. Like any new technology, AI brings opportunities and risks. We are still in the process of discovering what those opportunities and risks really are.”

"There are unique security challenges associated with AI systems that need specific attention. Data is at the heart of AI, and that data is often sensitive; financial, medical or personal in some way. The use of AI systems can expose personal data to third-party suppliers, sensitive training data can leak from the deployed systems, and there's potential for serious harm resulting from insufficiently robust AI systems."

"A risk-based approach to regulation allows businesses the freedom to innovate, while ensuring the public is protected from the potential dangers. We welcome the Government's positive engagement on these complex issues and look forward to responding to its White Paper consultation in due course."

Topics

  • Technology, general

Categories

  • increasing regulatory & legislative requirements
  • cyber security

Contacts

NCC Group Press Office

Press contact: All media enquiries relating to NCC Group plc, +44 7721577574

NCC Group - Financial Media Enquiries

Press contact: Maitland AMO, Financial Results Media Enquiries, +44 (0)20 7379 5151

Regional Press Office - North America

Press contact: +1 408 776 1400

Regional Press Office - Europe

Press contact: +31 20 794 4737