Progress on regulating AI in financial services is slow, perhaps for good reason

2023-07-30

AI isn’t anything new in the financial services sector. For several years, the technology has been used to support everything from automated stock trading to processing loan applications using credit-scoring algorithms. But AI’s rapid technological and commercial growth, which has enabled companies to process vast quantities of raw data, is a troubling prospect for regulators, especially those, like the Financial Conduct Authority (FCA), that are charged with ensuring honesty and fairness from technology often branded a ‘black box’ due to the opaque way it operates.

Regulators have been tasked with ensuring consumer protection without stunting business innovation. (Image by Shutterstock)

“I think one of the biggest challenges with regulation is the pace at which technology is evolving,” says Klaudija Brami, who works on legal technology at law firm Macfarlanes. “You’ve got this cat-and-mouse situation between technology development and regulation.”

While the EU has attempted to craft an all-encompassing, cross-sectoral set of regulations in the form of the AI Act, the UK is taking a more hands-off, principles-based approach. Individual regulators, like the FCA, are essentially being asked to cultivate responses to the technology on a sector-by-sector basis – a strategy that’s intended to offer a dynamic, pro-innovation environment to help fulfil Rishi Sunak’s pitch to make the UK a global hub of AI regulation.

But there’s still a long way to go. The FCA and the Bank of England issued their latest discussion paper on AI last October, a month before ChatGPT saw the light of day and threw AI into the global limelight. Since then, the promises and risks of AI have only grown more prominent, as the FCA’s chief executive, Nikhil Rathi, recently acknowledged, but haven’t yet been met by formal regulatory responses. 

The FCA might have good reason to take its time. AI is a rapidly changing and powerful technology, and some fear that setting fixed rules could clip its wings and impede British innovation. But AI can also spark its own problems and accentuate existing inequalities, meaning it remains a sticking point for regulators. What’s coming down the line? And can existing rules and regulations stand up to the rapid rise of AI? 

What’s the FCA doing about AI? 

In a speech at the beginning of July, Rathi said: “The use of AI can both benefit markets and can also cause imbalances and risks that affect the integrity, price discovery and transparency and fairness of markets if unleashed unfettered.” He promised a pro-business approach from the regulator, saying it would open up its AI “sandbox”, which enables real-world testing of products that aren’t yet compliant with existing regulations, to businesses eager to test out the latest innovations. “As the PM has set out,” said Rathi, “adoption of AI could be key to the UK’s future competitiveness – nowhere more so than in financial services.”

There’s a lot of talk about AI’s promise, but what kinds of risks will the FCA want to avoid? While algorithm-powered financial trading is well established, the big difference, amid the rapid rise of large-scale AI, is the increasingly widespread use of non-traditional data, such as social media behaviour or shopping habits, in consumer-facing financial services like loan-application assessments.

The FCA and other regulators are concerned, in particular, about the prospect of consumer detriment arising from AI models trained on inherently biased, inadequately processed, or insufficiently diverse datasets. “If there are biases or gaps in the data, AI systems are going to perpetuate or entrench inequalities that we already see within society,” says Brami.
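
To make that concern concrete, below is a minimal, hypothetical sketch of the kind of disparity check a lender might run on a credit-scoring model’s outputs. It is not drawn from any FCA guidance: the groups, decisions and tolerance threshold are all invented, and real fairness reviews use far richer metrics and governance processes.

```python
# Hypothetical sketch: checking whether a credit-scoring model approves
# applicants from different groups at noticeably different rates
# (a simple "demographic parity"-style check). All data is invented.

from collections import defaultdict

# (group, model_decision) pairs - pretend these came from a scoring model
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
print("Approval rates by group:", rates)

# Flag the model for review if approval rates diverge beyond a chosen margin
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # hypothetical tolerance, for illustration only
    print(f"Approval-rate gap of {gap:.0%} - investigate the training data for bias")
```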

There’s also the ever-lurking problem of explainability. Even the creators of AI models sometimes can’t say why those models make the decisions they do, but businesses will likely need to be able to explain the reasoning behind their tools and algorithms if they want to avoid a regulatory crackdown.
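
For illustration only, the sketch below shows one simple form an explanation can take: inspecting the learned coefficients of a transparent, linear credit model so that a single decision can be traced back to individual inputs. The features and data are invented; genuinely opaque systems such as deep networks and large language models are far harder to interrogate, which is precisely the regulators’ worry.

```python
# Hypothetical sketch: a linear credit model whose decisions can be
# explained feature by feature. Synthetic data throughout.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "existing_debt", "years_at_address"]

# Invented applicant data: 200 rows, 3 features
X = rng.normal(size=(200, 3))
# Invented "ground truth": approvals driven by income minus debt, plus noise
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# A crude, human-readable explanation for a single (invented) applicant:
# each coefficient times the applicant's value shows its pull on the decision
applicant = np.array([[0.8, -0.2, 1.5]])
contributions = model.coef_[0] * applicant[0]
for name, value in zip(features, contributions):
    print(f"{name:>18}: {value:+.2f} towards approval")
print("Decision:", "approve" if model.predict(applicant)[0] else "decline")
```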

The FCA has a special AI Strategy Team, which is charged with exploring the risks and opportunities of the technology. (Image by IB Photography/Shutterstock)

Slow and steady wins the race

Tim Fosh, a financial regulatory lawyer at Slaughter and May, says he certainly doesn’t envy the regulators. “They want to promote competition, which is one of [the FCA’s] new objectives, and they don’t necessarily want to stifle the promise in the area, because it could create considerable opportunities,” says Fosh. Nevertheless, there is heavy pressure to protect consumers and take visible action amid a moment of fraught public discourse around AI, a debate which has intensified since the launch of OpenAI’s GPT-4 large language model and competitors like Google Bard. 

“You don’t want to throw the baby out with the bath water just for the sake of making regulation,” says Fosh. That’s why, he speculates, regulators like the FCA have thus far been cautious about putting forward any formal proposals, even though they’ve been closely examining AI for several years. “Because of the dynamic nature of the industry, they don’t want to be regulating strictly at a point in time when everything is moving so fast,” Fosh says. Instead, he predicts a future marked by interpretive, principles-based guidance — a proposition that’s largely consistent with the UK’s broader sector-led and tech-neutral approach to AI. 

It’s not just the regulators facing an uphill battle. It’s also pretty tricky for businesses, developers and lawyers trying to keep up. “One of the mantras of start-ups is ‘move fast and break things’, but in a regulatory context that’s clearly very dangerous,” says Fosh. “New challenges and constraints are essentially being discovered by firms on a daily basis as they try to put these things in place. They try to comply with their obligations and find that the regulations, as they’re currently drafted, don’t neatly match up with what they’re trying to do.”

There’s a chance that regulators could ultimately require a specific manager within each organisation to take responsibility and accountability for AI, under an expansion of the existing UK Senior Managers and Certification Regime, which makes individuals accountable for a company’s conduct and competence. “That’s the key touch point: that the FCA will have a human to hold accountable if something goes wrong,” says Michael Sholem, a partner at Macfarlanes. Nevertheless, this kind of proposition might require some serious governmental support; otherwise, it’s probably not a professional responsibility that many people would want to take on. “How does that person ever get comfortable?” asks Fosh.

AI might seem new and shiny — as well as perplexing — but the FCA still has a lot of history to fall back on. Indeed, many of the ground rules that will govern AI might already be in place. “The FCA has been very clear that although they’re consulting on how to change the regulatory regime to deal with AI and ML, ultimately the FCA’s Principles for Businesses apply across these activities,” says Sholem. “There’s not, at this time, a need for a fundamental overhaul of everything to do with financial services regulation just to deal with AI and ML.”

Read more: UK government approach to AI leaves workers disadvantaged, Labour says
