UK’s ‘collaborative’ approach to AI regulation may prove complex and burdensome

2022-07-21 16:46:23

The UK government today outlined its proposed approach to legislation for regulating artificial intelligence (AI). Unlike the EU, which is developing a new AI law, the UK’s proposed approach would ask existing regulators to apply the principles of AI governance to their respective areas of focus.

A ‘collaborative’ system of AI regulation is the preferred approach among UK regulators, according to a recent study by the Alan Turing Institute, but could be complex to deliver and may overburden their resources, experts told Tech Monitor.

The national AI strategy will take a decentralised approach, putting control in the hands of existing regulators. (Photo courtesy of DCMS)

In a policy paper published today, the Department for Digital, Culture, Media and Sport outlined an approach to AI regulation that it describes as ‘context-specific’, ‘pro-innovation and risk-based’, ‘coherent’ and ‘proportionate and adaptable’.

Under the proposals, existing agencies such as Ofcom and the Competition and Markets Authority would be responsible for ensuring any AI used by industry, academia or the public sector within their areas of interest is technically secure, functions as designed, is “explainable”, and considers fairness.


Rather than each individual use of AI being regulated and controlled, regulators would have to follow a set of core principles, the policy paper says. They would apply these principles to their respective sectors and build on them with specific guidelines and regulations. Some sectors, such as healthcare and finance, will have stricter rules, whereas in others the rules will be more relaxed and voluntary.

These cross-sector principles include regulating AI based on its use and the impact it will have on individuals, groups and businesses. Regulation should also be pro-innovation and risk-based, focusing on addressing issues where there is clear evidence of real risk or missed opportunities. And it should be tailored to the distinct characteristics of AI, ensuring the overall regulations are easy to understand and follow.

AI regulation in the UK: a collaborative approach

The government’s proposed approach stands in contrast to that of the EU, whose AI Act seeks to establish a new law governing the use of AI across the bloc. “The EU is adopting a risk-based approach,” says Adam Leon Smith, CTO of AI agency Dragonfly and the UK representative in the EU’s AI standards group. “It is specifically prohibiting certain types of AI, and requiring high-risk use cases to be subject to independent conformity assessment. 

“The UK is also following a context-specific and risk-based approach, but is not trying to define that approach in primary legislation, instead, it is leaving that to individual regulators.”


A more collaborative approach, in which regulators work together to define principles but apply them separately in their areas of focus, is the preferred approach among regulators, according to a recent study by AI think tank the Alan Turing Institute.

Regulators consulted in the study rejected the prospect of a single AI regulator, said Dr Cosmina Dorobantu, co-director of the public policy programme at The Alan Turing Institute. “Everybody shot that down because it would affect the independence of the regulators,” she explained.

The prospect of a purely voluntary system of AI regulation was also rejected. “AI is a very broad technology,” said Professor Helen Margetts, programme director for public policy at the institute. “Regulation has to be a collaborative effort.”


Nevertheless, the government’s proposed approach is likely to be a complex undertaking, given the number of regulatory agencies in the UK. “One of the more surprising things we learned during the study is that there is no list of regulators,” said Dr Dorobantu. “Nobody keeps a central database. There are over 100, ranging from some with thousands of employees to others with just one person.”

All of these regulators will need to develop AI expertise under the proposed approach, the pair explained, and it will also need to be clarified how they should coordinate their activity where regulations overlap.

The government’s proposed approach could also prove burdensome for the regulators, argues Leon Smith. “It is unclear if the ICO and Ofcom will be able to handle the increased workload,” he said. “This workload is particularly important given the frequency of change that AI systems undergo, but also the expected impact of the Online Safety Bill on Ofcom.”

The UK’s proposed approach includes a provision that would require all high-risk AI applications to be “explainable”, particularly with respect to bias and potential inaccuracies. This goes further than the EU’s AI Act, Leon Smith observes.

“The policy paper states that regulators may deem that high-risk decisions that cannot be explained should be prohibited entirely. The EU has not gone so far, merely indicating that information about the operation of the systems should be available to users.”

The government has invited interested parties to provide feedback on the policy paper, and said it will set out more details of the proposed regulatory framework in a forthcoming white paper.

Read more: MEPs are preparing to debate Europe’s AI Act. These are the most contentious issues.

Topics in this article: AI
