UK government’s AI strategy to rely on existing regulations instead of new laws


The UK government has today published a white paper outlining its plans to regulate general purpose artificial intelligence.

The paper, published by the newly formed Department for Science, Innovation and Technology (DSIT), sets out guidelines for what it calls “responsible use” and outlines five principles it wants companies to follow. They are: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.

However, in order to “avoid heavy-handed legislation which could stifle innovation”, the government has opted not to give responsibility for AI governance to a single new regulator. Instead, it is calling on existing regulators such as the Health and Safety Executive, the Equality and Human Rights Commission, and the Competition and Markets Authority to come up with approaches that best suit the way AI is being used in their sectors.

Consequently, in the absence of any new laws, these sectoral regulators will have to rely on existing powers.

Outlining its next steps, the government said that over the next 12 months regulators will issue practical guidance to organizations, setting out how to implement the principles, and will provide risk assessment templates. It added that legislation could also be formally introduced to ensure regulators consider the principles consistently.

While the government says the approach outlined in its white paper will mean the UK’s rules can adapt as this “fast-moving technology” develops, others are more sceptical.

Giulia Gentile, a fellow at the London School of Economics Law School whose research focuses on digital society and AI regulation, wrote on Twitter that existing frameworks may not be able to regulate AI effectively, given the complex and multi-layered nature of some AI tools, and that conflation between different regimes will be inevitable.

“Differently from other technologies, AI exacerbates vulnerabilities. At the same time, it is in [the] hands of a few companies shaping the ways in which this technology works,” she wrote. “As a result, the absence of an AI framework has the likely potential to create a more unequal and unjust society, enhancing the market and power asymmetries between those who dominate digital tools and those who are impacted by AI technology.”

According to the government, the UK’s AI industry is currently thriving, employing over 50,000 people and contributing £3.7 billion to the economy last year.

In his budget earlier this month, Chancellor Jeremy Hunt announced a new AI research award which will offer £1 million per year to the company that has achieved the “most groundbreaking British AI research.”

This was in addition to an AI sandbox to help innovators get cutting-edge products to market, and a promise to work with the Intellectual Property Office to provide clarity on IP rules so generative AI companies can access the material they need.

Given the scale of the industry—Britain is home to twice as many companies providing AI products and services as any other European country—Dr Gentile said that ultimately, she found the government’s approach underwhelming, especially given the latest developments in AI.

“The impression is that the UK Government is allowing innovation to triumph as a value in itself without considering the broader disruptive implications for the society,” she wrote.

Copyright © 2023 IDG Communications, Inc.
