The sprawling nature of the regulatory landscape for artificial intelligence (AI) is a core issue for stakeholders as they push for more international and cross-sectoral harmonisation of rules.
Stakeholders in the AI space told UK regulators that the regulatory landscape for AI is “complex and fragmented” and asked for more coordination and alignment between regulators, a feedback statement published last week (October 26) reveals.
The submissions came in response to a joint discussion paper prepared by the Bank of England (BoE), the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA), which asked the public to share their views on the risks and benefits of the use of AI in financial services.
It also asked stakeholders to weigh in on whether the technology can be managed through updates to the existing regulatory framework or if a new approach is needed.
The discussion comes as recent developments, particularly with regard to generative AI, have drawn regulators’ attention to the potential risks of the new technology, while they try to ensure its safe and responsible development.
According to government figures, the UK’s AI sector ranks third behind the US and China, contributing £3.7bn to the economy and employing 50,000 people.
Extensive regulation for AI
Respondents, who ranged from banks and industry bodies to technology providers, financial market infrastructures and consumer associations, told regulators that AI in financial services is already subject to extensive legal requirements and guidance.
“The suite of regulations governing the use of AI in the financial sector is already extensive, so care is needed to avoid creating unnecessary new requirements,” according to one of the respondents.
For example, the BoE, FCA and PRA have each issued several statements related to operational resilience and outsourcing that are relevant for AI, while existing requirements of discrimination laws, intellectual property law, contract law and forms of ethical guidance also apply.
Data protection laws, such as the UK General Data Protection Regulation (GDPR), also apply to AI use, although some stakeholders raised concerns about difficulties in “the way UK GDPR interacts with AI”, while others noted that a lack of understanding among suppliers may lead to some businesses “potentially gaming or ignoring the rules”.
“Given these complexities, the industry is right to call for a joined-up approach to managing and mitigating AI risks,” according to Pedro Bizarro, chief science officer at Feedzai.
This is especially the case in financial services, “where positive consumer outcomes, with respect to fairness and protection, are vital”, Bizarro added.
Stakeholders, therefore, emphasised the importance of cross-sectoral and cross-jurisdictional coordination, pointing out that “AI is a cross-cutting technology extending across sectoral boundaries”.
“Since many regulated firms operate in multiple jurisdictions, an internationally coordinated and harmonised regulatory response is critical in ensuring that UK regulation does not disadvantage UK firms and markets while also minimising fragmentation and operational complexity,” respondents said.
UK sets up world’s first AI Safety Institute
Ahead of this week’s global summit on AI safety, UK Prime Minister Rishi Sunak announced the launch of the world’s first AI Safety Institute on Thursday (October 26).
The institute is aimed at exploring the risks of AI, from social harms such as bias, misinformation, fraud and cyber-attacks, through to the use of AI by terrorist groups to spread fear or build chemical or biological weapons.
“Right now, the only people testing the safety of AI are the very organisations developing it,” Sunak said when announcing the institute.
As many of these firms have the incentives to compete and be the first to build the best models, “we should not rely on them marking their own homework”, Sunak added.
The Prime Minister added that he hopes world leaders in the AI space will come to an agreement at the summit regarding the nature of the risks and start a global conversation similar to the one begun by members of the Intergovernmental Panel on Climate Change.
Bizarro said he agrees with Sunak's observation about potential bias in technology giants’ self-regulation, adding that it is “imperative” that these regulatory dialogues involve stronger participation from small and medium-sized enterprises, start-ups, research institutions, academia, open-source groups and representatives of clients or users of those models.
“While the launch of the world's first AI Safety Institute is commendable, a comprehensive understanding of the capabilities of new AI models and what guardrails or desired criteria are needed can only be achieved if it embraces the entire tech community, rather than solely focusing on major Silicon Valley players”, Bizarro told Vixio.
Bizarro stressed that AI could operate “as a two-sided coin, enabling new risks, but also enabling new, likely greater benefits, capable of enabling misuse, but also capable of thwarting misuse”.
“Within the realm of financial crime, AI is playing an expanding role in bolstering security and providing invaluable insights across the financial services industry.
“Moreover, AI's potential extends to identifying the misuse of generative AI, particularly concerning deep fakes, which remains a prominent concern for many.”