Global Regulatory Brief: Digital finance, May edition

UK regulators provide updated approach to AI in financial services
The UK’s financial regulators have provided more detail on their approach to artificial intelligence (AI) and machine learning (ML) in regulated UK financial markets.
First, the Bank of England (BoE) and the Prudential Regulation Authority (PRA) have written to the UK Government to set out their latest approach to AI and ML. Notable takeaways include:
- The PRA and BoE have observed growing adoption of AI/ML within financial services to improve firms’ operational efficiency, better detect fraud and money laundering, and enhance data and analytics capabilities
- The PRA and BoE have so far been able to meet their statutory objectives while supporting the safe adoption of AI/ML in financial services
- While the PRA and BoE take a technology-agnostic approach to AI/ML (i.e. rules do not usually mandate or prohibit specific technologies), this does not mean they are technology-blind, and they will clarify how existing rules and expectations apply where such clarification is needed. Four potential areas of clarification include:
- Data Management
- Model Risk Management
- Governance
- Operational Resilience and Third-Party Risks
- The Financial Policy Committee (FPC) is looking at AI and ML from a UK financial-stability perspective and will further consider these risks over the course of 2024
- Future PRA-BoE engagement with stakeholders on AI may include the establishment of a new AI Consortium
Second, the Financial Conduct Authority (FCA) has also published its response to the UK Government’s white paper on AI. Notable takeaways include:
- General approach: The FCA does not usually mandate or prohibit certain technologies; a more outcomes-focused approach gives firms greater flexibility to adapt and innovate. The FCA considers that many risks related to AI are not necessarily unique to AI itself and can therefore be mitigated within existing legislative and/or regulatory frameworks
- Safety, security, robustness: The FCA recognises overlap with the FCA’s Principles for Businesses, Threshold Conditions, Senior Management Arrangements, Systems and Controls, and the FCA’s work on operational resilience, outsourcing and critical third parties (CTPs).
- The adoption of AI may lead to the emergence of third-party providers of AI services who are critical to the financial sector. If that were to be the case, these systemic AI providers could come within scope of the proposed regime for CTPs, if they were designated by HM Treasury
- Relatedly, the FCA is concerned about the competition risks that could arise from the concentration of third-party technology services – such as cloud services and AI-model development – among Big Tech firms
- Fairness: The FCA’s regulatory approach to consumer protection is particularly relevant to fairness in firms’ safe use of AI systems, and the Consumer Duty requires firms to play a greater and more proactive role in delivering good outcomes for retail customers
- Where firms use AI systems that process personal data, they will also need to consider obligations under data protection legislation, including the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018
- Transparency: There are no specific transparency or explainability requirements yet, but there is a cross-cutting obligation under the Consumer Duty to act in good faith, which is characterized by honesty and fair and open dealing with retail consumers
- As part of ensuring the processing of personal data is fair and transparent under the UK GDPR, data controllers must provide data subjects with certain information about their processing activities, including the existence of automated decision making and profiling
- Accountability and governance: The FCA’s regulatory framework contains a range of rules and guidance pertaining to firms’ governance and accountability arrangements, which will be relevant to firms using AI safely and responsibly as part of their business models
- The FCA will publish a consultation on the Senior Managers and Certification Regime (SM&CR) in June 2024
- Under the Consumer Duty, a firm’s board, or equivalent governing body, should review and approve an assessment, evidenced with data, of whether the firm is delivering good outcomes for its customers and, where it is not, detail an action plan to remedy this. The first annual report is due on July 31, 2024
- Contestability and redress: Where a firm’s use of AI results in a breach of the rules, there are a range of mechanisms through which firms can be held accountable and through which consumers can get redress
- The FCA as a user of AI: The FCA is improving how it uses data through its advanced analytics unit, the synthetic data expert group, and machine learning for fighting scams and for trade surveillance. The FCA is supporting the development of AI surveillance tools for markets through the TechSprint, where trade surveillance specialists will be able to develop and test their AI-powered surveillance solutions using the FCA’s extensive trading datasets on the Digital Sandbox platform
- Market abuse: The FCA is particularly interested in how AI can help identify more complex types of market abuse that are currently difficult to detect, such as cross-market manipulation; improve the accuracy of market abuse detection more generally; and ultimately transform market abuse surveillance by incorporating anomaly detection (a minimal illustrative sketch of this idea follows this list)
- Data analytics capabilities: In 2021, the FCA appointed its first ever Chief Data, Information and Intelligence Officer to lead the newly created Data, Technology and Innovation (DTI) division. DTI leads the FCA’s response to emerging technological developments in areas such as quantum computing, AI and blockchain, and supports the FCA’s own data and tech capability development
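To make the anomaly-detection approach referenced above concrete, here is a minimal sketch of what unsupervised scoring of trading activity could look like. It is purely illustrative: the feature names, synthetic data and contamination rate are assumptions, and it is not a description of the FCA’s actual surveillance tooling.

```python
# Illustrative only: unsupervised anomaly scoring of trading activity.
# Feature names, data and the contamination rate are assumptions,
# not the FCA's surveillance model.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-order features aggregated from trade reports
orders = pd.DataFrame({
    "order_size": rng.lognormal(8, 1, 1000),
    "price_deviation_bps": rng.normal(0, 5, 1000),   # distance from mid-price
    "venues_touched": rng.integers(1, 4, 1000),      # cross-market footprint
    "cancel_ratio": rng.beta(2, 8, 1000),            # cancellations / submissions
})

# Fit an isolation forest, assuming roughly 1% of orders are anomalous
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(orders)

# Lower scores are more anomalous; surface the top candidates for analyst review
scores = model.decision_function(orders)
orders["anomaly_score"] = scores
review_queue = orders.nsmallest(10, "anomaly_score")
print(review_queue)
```

In practice, flagged orders would feed a case-management workflow for human investigation rather than drive automated action.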
The FCA’s work on AI over the next 12 months: The FCA’s plans center on further developing its understanding of AI deployment in UK financial markets, for example by running a third edition of the machine-learning survey jointly with the Bank of England to consider AI across different areas of the financial system.
- Recent developments, such as the rapid rise of Large Language Models (LLMs), put resilience center stage and, according to the FCA, underline the importance of regulatory regimes for operational resilience, outsourcing, and critical third parties
- International collaboration remains important and the FCA is closely involved in the work of IOSCO and the Financial Stability Board on AI
- The FCA will continue to invest in these technologies to proactively monitor markets, including for market surveillance purposes, and is currently exploring further potential use cases, such as using Natural Language Processing to aid triage decisions, assessing AI to generate synthetic data, and using LLMs to analyze and summarize text (a toy sketch of NLP-aided triage follows this list)
- The FCA is actively monitoring advancements in quantum computing and examining the potential benefits for industry and consumers while also considering the impact of the inherent security risks
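As an illustration of the NLP-aided triage use case mentioned in the list above, the toy sketch below trains a simple text classifier that routes incoming reports to a priority queue. The example texts, labels and routing threshold are invented for illustration and do not reflect the FCA’s systems.

```python
# Illustrative only: a toy text-classification triage aid.
# Training examples, labels and the routing threshold are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "suspected insider dealing ahead of results announcement",
    "customer complaint about a delayed account statement",
    "possible spoofing pattern across two venues",
    "request to update a registered office address",
]
labels = [1, 0, 1, 0]  # 1 = escalate to a specialist team, 0 = routine handling

triage = make_pipeline(TfidfVectorizer(), LogisticRegression())
triage.fit(reports, labels)

new_report = ["unusual order cancellations before a large price move"]
escalation_probability = triage.predict_proba(new_report)[0, 1]

# Route to the specialist queue only when the model is sufficiently confident;
# everything else stays with human reviewers in the normal workflow
queue = "specialist" if escalation_probability > 0.5 else "routine"
print(queue, round(float(escalation_probability), 2))
```

A real triage aid would be trained on far more data and would support, not replace, human decision-making.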
US and UK sign agreement on advanced AI models
The United States and the United Kingdom signed a Memorandum of Understanding (MOU) which outlined how the two countries will work together on the development of testing for advanced AI models.
- The MOU also states that the two countries will continue to work on aligning their scientific approaches, and will work together to “accelerate and rapidly iterate robust suites of evaluations for AI models, systems, and agents.”
- Further, the countries have put together plans to build out a common approach to AI safety testing, as well as to share their existing capabilities to address safety concerns
- The U.S. and UK AI Safety Institutes also plan to conduct at least one joint testing exercise on a publicly available model, while also exploring personnel exchanges
- The MOU goes into effect immediately and reflects the urgency around AI safety expressed by governments across the globe
Hong Kong Monetary Authority publishes observations on transaction monitoring and RegTech
The Hong Kong Monetary Authority (HKMA) has published a report on how financial institutions have reduced inefficiencies in anti-money laundering (AML) systems and on their use of machine learning and other Regtech tools to aid effective risk management.
In summary: The thematic review covered firms’ processes for the design, implementation and optimization of transaction monitoring (TM) systems, including:
- Management oversight and governance
- Assessment of the risk coverage of the TM system and selection of detection scenarios
- Identification of Critical Data Elements (CDE)
- Data quality and lineage testing
- Customer segmentation
- Threshold setting and tuning (an illustrative sketch covering both of these areas follows this list)
- Functional testing
- Periodic review
- Optimization using Regtech, including AI
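As a concrete illustration of the customer segmentation and threshold-tuning steps referenced in the list above, the sketch below clusters customers on a few behavioural features and derives a large-transaction alert threshold per segment. The features, data and percentile choice are hypothetical and are not drawn from HKMA guidance.

```python
# Illustrative only: customer segmentation and per-segment threshold tuning.
# Features, data and the percentile choice are hypothetical.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 2000

# Hypothetical monthly activity profile per customer
customers = pd.DataFrame({
    "avg_monthly_turnover": rng.lognormal(9, 1, n),
    "txn_count": rng.poisson(25, n),
    "cross_border_share": rng.beta(2, 10, n),
})

# Group customers with similar behaviour so thresholds reflect peer activity
features = StandardScaler().fit_transform(customers)
customers["segment"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

# Set a large-transaction alert threshold per segment (here the 99th percentile
# of that segment's turnover), then review resulting alert volumes before adoption
thresholds = customers.groupby("segment")["avg_monthly_turnover"].quantile(0.99)
print(thresholds)
```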
Important context: The HKMA has been focusing for a number of years on how machine learning and AI can help address excessive false-positive alerts in transaction monitoring and screening (a minimal illustrative sketch of alert scoring follows the points below).
- Regarding the risk management challenges posed by AI, the HKMA issued guidance in 2019 in the form of high-level principles on the use of AI applications
- Firms considering deploying AI are invited to clarify regulatory expectations in the HKMA’s Fintech Supervisory Chatroom or through the Fintech Supervisory Sandbox
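To illustrate how machine learning might help with the false-positive problem the HKMA highlights, the sketch below trains a supervised model on historical alert dispositions and uses it to rank new alerts so that investigators review the highest-risk ones first. The features, labels and data are hypothetical; this is not HKMA guidance or any firm’s production system.

```python
# Illustrative only: scoring transaction-monitoring alerts to prioritise review.
# Features, labels and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000

# Hypothetical features derived from historical transaction-monitoring alerts
alerts = pd.DataFrame({
    "amount": rng.lognormal(7, 1.2, n),
    "num_counterparties": rng.integers(1, 20, n),
    "high_risk_jurisdiction": rng.integers(0, 2, n),
    "velocity_7d": rng.poisson(3, n),
})
# Disposition from past investigations: 1 = confirmed suspicious, 0 = false positive
confirmed = (rng.random(n) < 0.05).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    alerts, confirmed, test_size=0.2, random_state=0
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Rank unreviewed alerts by estimated risk; investigators work from the top down,
# while low-scoring alerts are sampled periodically as a quality-assurance check
scores = model.predict_proba(X_test)[:, 1]
ranked = X_test.assign(risk_score=scores).sort_values("risk_score", ascending=False)
print(ranked.head())
```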
Dubai International Financial Centre enacts Digital Assets legislation
Dubai International Financial Centre (DIFC) has enacted its Digital Assets Law, along with related legislative amendments, to cater for the consequences of the new digital assets regime and a revised security regime.
The intention: The legislative enactments aim to ensure DIFC Laws keep pace with the rapid developments in international trade and financial markets arising from these technological developments and to provide legal certainty for investors in and users of digital assets.
The amendments represent the first legislative enactment to comprehensively set out the legal characteristics of digital assets as a matter of property law, and to provide for how digital assets may be controlled, transferred and dealt with by interested parties.
International context: While the primary focus in many jurisdictions has been to regulate digital assets and impose enforcement-related sanctions from a regulated financial services perspective, the DIFC is seeking to nurture the potential benefits associated with blockchain technology and its application across a wide spectrum of use cases.
Following extensive review of the legal approaches taken to digital assets in multiple jurisdictions, and a period of public consultation in 2023, DIFC has now enacted its own Digital Assets Law.
Securities Commission Malaysia to strengthen market surveillance using AI
The Securities Commission Malaysia (SC) outlined plans in its annual report to use supervisory technology (SupTech) and data analytics to enhance surveillance and supervision.
Proprietary AI-powered surveillance tool: The SC has developed an AI-powered tool, PLC360, to more effectively provide oversight of public listed companies (PLCs) and auditors by performing the following tasks:
- Predict likelihood of misconduct
- Detect key risk areas and identify emerging trends
- Monitor financial health for all PLCs
- Connect multiple data sources
- Identify connections between auditors and PLCs (a hypothetical graph-based sketch follows this list)
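One of the listed tasks, identifying connections between auditors and PLCs, lends itself naturally to a graph representation. The sketch below uses a small, invented dataset to show the idea; it is a hypothetical illustration, not a description of how the SC’s PLC360 tool is built.

```python
# Illustrative only: mapping hypothetical auditor-PLC relationships as a graph.
# Entities and engagements are invented; this is not the SC's PLC360 tool.
import networkx as nx

# (auditor, listed company) engagement pairs
engagements = [
    ("Auditor A", "PLC 1"),
    ("Auditor A", "PLC 2"),
    ("Auditor A", "PLC 3"),
    ("Auditor B", "PLC 3"),
    ("Auditor B", "PLC 4"),
    ("Auditor C", "PLC 5"),
]

graph = nx.Graph()
graph.add_edges_from(engagements)

# Concentration check: auditors engaged by many PLCs
auditor_load = {
    node: graph.degree(node) for node in graph.nodes if node.startswith("Auditor")
}
print(sorted(auditor_load.items(), key=lambda item: item[1], reverse=True))

# PLCs that share an auditor with a given company (two hops away in the graph)
shared = {
    plc
    for auditor in graph.neighbors("PLC 3")
    for plc in graph.neighbors(auditor)
    if plc != "PLC 3"
}
print(shared)
```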
Israeli securities authority to incorporate AI in its oversight and enforcement
The Israeli Securities Authority (ISA) has identified the incorporation of AI in oversight and enforcement as a key strategic priority for the regulator over the coming year.
In more detail: This commitment to modernizing Israel’s regulatory infrastructure for capital markets is part of the ISA’s 2023 Annual Report in which the ISA seeks to increase the credibility of the market through enforcement measures to ensure investor protection and address dynamic market demands. A number of recent key measures stand out:
- Completion of the licensing of financial information providers
- The promotion of the innovative Payment Services Law
- The development of Israel’s money market through the promotion of money market funds, which – among other things – address interest rate hikes
- Extensive oversight and enforcement actions with respect to corporations, financial institutions and stock exchange trading activity
Looking ahead: The ISA will update its concept of deterrence and enforcement, advance the law for improving regulatory efficiency, develop the stock exchange and improve trading liquidity, make additional instruments for capital-raising more accessible to a greater number of entities, and make the capital market more accessible to retail investors.
Bipartisan proposal aims to curtail “extreme AI risks”
U.S. Senators Mitt Romney (R-UT), Jack Reed (D-RI), Jerry Moran (R-KS), and Angus King (I-ME) unveiled the first congressional framework to deal exclusively with the “extreme risks” posed by advanced AI models as they continue to be developed.
- The framework would establish federal oversight of frontier model hardware, development and deployment. This oversight mechanism is aimed at mitigating “AI-enabled extreme risks from biological, chemical, cyber, and nuclear threats.”
- Congress has seen a flood of AI-related bills during this session and there is no reason to expect any change in that cadence anytime soon
- Legislators are attempting to deal with the wide array of opportunities and risks posed by advanced AI models, particularly generative AI
- While it is not clear which, if any, standalone proposals would pass either house of Congress, it is not unreasonable to expect AI-related provisions to work their way into other bills throughout the course of the year
UK ICO seeks views on accuracy of generative-AI models
The UK Information Commissioner’s Office (ICO) has launched the latest installment in its consultation series examining how data protection law applies to the development and use of generative AI.
In summary: The third consultation in the series focuses on how the accuracy principle in data protection law applies to the outputs of generative-AI models, and the impact that accurate training data has on those outputs.
This follows concern that reliance on generative-AI models to provide factually accurate information about people can lead to misinformation, reputational damage and other harms.
Important context: The third call comes as the UK Information Commissioner visits leading tech firms in Silicon Valley to reinforce the ICO’s regulatory expectations around generative AI, as well as seeking progress from the industry on children’s privacy and online tracking.
The regulator has already considered the lawfulness of web scraping to train generative-AI models and examined how the purpose limitation principle should apply to generative-AI models.
Looking ahead: The consultation is open until May 10, 2024 and further consultations on information rights and controllership in generative AI will follow in the coming months.
Japanese regulators publish results of cybersecurity self-assessments at regional financial institutions
The Bank of Japan (BOJ) and Financial Services Agency (FSA) published the results of a cybersecurity self-assessment for regional financial institutions.
Context: The growing threat of cyber attacks underlines the need for Japanese financial institutions to develop and ensure the effectiveness of cybersecurity management systems.
Survey results: The survey results show that many regional financial institutions understand that ensuring cybersecurity is an important management issue and are introducing both technological and organizational measures to improve it.
However, the survey confirmed that there are still issues with securing and training cybersecurity personnel and managing third-party risks.
Looking ahead: The BOJ and FSA plan to continue strengthening financial institutions’ cybersecurity management systems through on-site examinations, inspections and monitoring.
Senators ask CFTC Chair for “accounting” of interactions with Sam Bankman-Fried
U.S. Senators Elizabeth Warren (D-MA) and Chuck Grassley (R-IA) sent a letter to CFTC Chairman Rostin Behnam asking for “an accounting of all meetings and correspondence between you and Sam Bankman-Fried during your tenure.”
Behnam has previously spoken about his interactions with the now-disgraced former head of crypto exchange FTX, but Warren and Grassley appear to think Behnam has not been fully forthcoming about his interactions with Bankman-Fried.
During testimony in 2022, Behnam stated that over a 14-month period he and his team met with Bankman-Fried as many as 10 times and exchanged “a number of messages”.
Context: At the time, Bankman-Fried was attempting to gain approval for LedgerX, a division of FTX, to directly handle margin derivatives trading for customers without an intermediary firm.
The letter asked for responses from Behnam by April 29, 2024, although as of this writing it remains to be seen whether the request has been satisfied.