
Progress on regulating AI in financial services is slow, perhaps for good reason

2023-07-30 14:23:25

AI isn’t anything new in the financial services sector. For several years, the technology has been used to support everything from automated stock trading to processing loan applications using credit-scoring algorithms. But AI’s rapid technological and commercial growth, which has enabled companies to process vast quantities of raw data, is a troubling prospect for regulators, especially those, like the Financial Conduct Authority (FCA), that are charged with ensuring honesty and fairness from technology often branded a ‘black box’ due to the opaque way it operates.

AI Robot behind bars
Regulators have been tasked to ensure consumer protection without stunting business innovation. (Image by Shutterstock)

“I think one of the biggest challenges with regulation is the pace at which technology is evolving,” says Klaudija Brami, who works on legal technology at law firm Macfarlanes. “You’ve got this cat-and-mouse situation between technology development and regulation.”

While the EU has attempted to craft an all-encompassing, cross-sectoral set of regulations in the form of the AI Act, the UK is taking a more hands-off, principles-based approach. Individual regulators, like the FCA, are essentially being asked to cultivate responses to the technology on a sector-by-sector basis – a strategy that’s intended to offer a dynamic, pro-innovation environment to help fulfil Rishi Sunak’s pitch to make the UK a global hub of AI regulation.

But there’s still a long way to go. The FCA and the Bank of England issued their latest discussion paper on AI last October, a month before ChatGPT saw the light of day and threw AI into the global limelight. Since then, the promises and risks of AI have only grown more prominent, as the FCA’s chief executive, Nikhil Rathi, recently acknowledged, but haven’t yet been met by formal regulatory responses. 

The FCA might have good reason to take its time. AI is a rapidly changing and powerful technology, and some fear that setting fixed rules could clip its wings and impede British innovation. But AI can also spark its own problems and accentuate existing inequalities, meaning it remains a sticking point for regulators. What’s coming down the line? And can existing rules and regulations stand up to the rapid rise of AI? 

What’s the FCA doing about AI? 

In a speech at the beginning of July, Rathi said: “The use of AI can both benefit markets and can also cause imbalances and risks that affect the integrity, price discovery and transparency and fairness of markets if unleashed unfettered.” He promised a pro-business approach from the regulator, saying it would open up its AI “sandbox”, which enables real-world testing of products that aren’t yet compliant with existing regulations, to businesses eager to test out the latest innovations. “As the PM has set out,” said Rathi, “adoption of AI could be key to the UK’s future competitiveness – nowhere more so than in financial services.”

There’s a lot of talk about AI’s promise, but what kinds of risks will the FCA want to avoid? While algorithm-powered financial trading is well established, the big difference, amid the rapid rise of large-scale AI, is the increasingly widespread use of non-traditional data, such as social media behaviour or shopping habits, in consumer-facing financial services like loan-application assessments.

The FCA and other regulators are concerned, in particular, about the prospect of consumer detriment arising from AI models trained on inherently biased, inadequately processed or insufficiently diverse datasets. “If there are biases or gaps in the data, AI systems are going to perpetuate or entrench inequalities that we already see within society,” says Brami.
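One way the concern about biased training data shows up in practice is in disparity testing: comparing a model's approval rates across demographic groups. The sketch below is purely illustrative and not any regulator's prescribed method; the group labels, decisions and the 80% rule-of-thumb threshold are all assumptions for the example.

```python
# Toy illustration: does a credit-scoring model approve two groups
# of applicants at similar rates? The decisions below are invented.

def approval_rate(decisions):
    """Fraction of applicants approved (decisions are True/False)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one.
    Values near 1.0 suggest parity; a common rule of thumb
    flags ratios below 0.8 for review."""
    lo, hi = sorted([approval_rate(group_a), approval_rate(group_b)])
    return lo / hi

# Hypothetical decisions from a model trained on a skewed dataset.
group_a = [True, True, True, True, False]    # 80% approved
group_b = [True, False, False, False, True]  # 40% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
print("flag for review" if ratio < 0.8 else "within rule-of-thumb bounds")
```

A real assessment would of course look at far more than a single ratio, but even this simple check makes the "gaps in the data" problem concrete: a skewed training set tends to surface as a skewed decision rate.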


There’s also the ever-lurking problem of explainability. Even the creators of AI models sometimes don’t know why they’re making their decisions, but businesses will likely need to be able to explain the reasoning behind their tools and algorithms if they want to avoid a regulatory crackdown. 
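For simple model families, the explainability demand is tractable: when a score is a weighted sum of features, each feature's contribution to an individual decision can be reported directly. The weights and applicant fields below are invented for illustration; this is a minimal sketch of the idea, not a description of any firm's actual scoring model.

```python
# Minimal sketch: explaining a linear credit score by reporting each
# feature's contribution to an individual decision. Weights are invented.

WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_at_address": 0.2}

def score(applicant):
    """Weighted sum of the applicant's features."""
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def explain(applicant):
    """Per-feature contributions, largest absolute effect first."""
    contribs = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))

applicant = {"income": 3.0, "debt_ratio": 2.5, "years_at_address": 4.0}
print(f"score: {score(applicant):.2f}")
for feature, contribution in explain(applicant):
    print(f"  {feature}: {contribution:+.2f}")
```

The catch, and the reason explainability "ever lurks", is that this transparency is exactly what deep models give up: there is no equivalent one-line decomposition for a large neural network, which is why post-hoc explanation techniques exist at all.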


FCA
The FCA has a special AI Strategy Team, which is charged with exploring the risks and opportunities of the technology. (Image by IB Photography/Shutterstock)

Slow and steady wins the race

Tim Fosh, a financial regulatory lawyer at Slaughter and May, says he certainly doesn’t envy the regulators. “They want to promote competition, which is one of [the FCA’s] new objectives, and they don’t necessarily want to stifle the promise in the area, because it could create considerable opportunities,” says Fosh. Nevertheless, there is heavy pressure to protect consumers and take visible action amid a moment of fraught public discourse around AI, a debate which has intensified since the launch of OpenAI’s GPT-4 large language model and competitors like Google Bard. 

“You don’t want to throw the baby out with the bath water just for the sake of making regulation,” says Fosh. That’s why, he speculates, regulators like the FCA have thus far been cautious about putting forward any formal proposals, even though they’ve been closely examining AI for several years. “Because of the dynamic nature of the industry, they don’t want to be regulating strictly at a point in time when everything is moving so fast,” Fosh says. Instead, he predicts a future marked by interpretive, principles-based guidance — a proposition that’s largely consistent with the UK’s broader sector-led and tech-neutral approach to AI. 

It’s not just the regulators facing an uphill battle. It’s also pretty tricky for businesses, developers and lawyers trying to keep up. “One of the mantras of start-ups is ‘move fast and break things’, but in a regulatory context that’s clearly very dangerous,” says Fosh. “New challenges and constraints are essentially being discovered by firms on a daily basis as they try to put these things in place. They try to comply with their obligations and find that the regulations, as they’re currently drafted, don’t neatly match up with what they’re trying to do.”

There’s a chance that regulators could ultimately require a specific manager within each organisation to take responsibility and accountability for AI, in an expansion of the existing UK Senior Managers and Certification Regime, which makes individuals accountable for a company’s conduct and competence. “That’s the key touch point: that the FCA will have a human to hold accountable if something goes wrong,” says Michael Sholem, a partner at Macfarlanes. Nevertheless, this kind of proposition might require some serious governmental support; otherwise, it’s probably not a professional responsibility that many people would want. “How does that person ever get comfortable?” asks Fosh.

AI might seem new and shiny — as well as perplexing — but the FCA still has a lot of history to fall back on. Indeed, many of the ground rules that will govern AI might already be in place. “The FCA has been very clear that although they’re consulting on how to change the regulatory regime to deal with AI and ML, ultimately the FCA’s Principles for Businesses apply across these activities,” says Sholem. “There’s not, at this time, a need for a fundamental overhaul of everything to do with financial services regulation just to deal with AI and ML.”

Read more: UK government approach to AI leaves workers disadvantaged, Labour says

