FCA Board Focuses on AI

by Stuart Davis, Fiona M. Maclean, Gabriel Lakeman, and Imaan Nazir

From left to right: Stuart Davis, Fiona M. Maclean, Gabriel Lakeman, and Imaan Nazir. (Photos courtesy of Latham & Watkins LLP)

A new publication from the UK’s financial regulator signals to firms that they should take steps to manage risks in the use of AI.

The UK’s Financial Conduct Authority (FCA) has published its latest board minutes, which highlight its increasing focus on artificial intelligence (AI): the board “raised the question of how one could ‘foresee harm’ (under the new Consumer Duty), and also give customers appropriate disclosure, in the context of the operation of AI”. This publication indicates that AI continues to be a key area of attention within the FCA. It also demonstrates that the FCA believes its existing powers and rules already impose substantive requirements on regulated firms considering deploying AI in their services.

In particular, the FCA’s new Consumer Duty imposes broad “cross-cutting” obligations on firms, including the obligation to avoid causing foreseeable harm to consumers, and the FCA has enforcement powers to ensure firms comply with these requirements. Although the board minutes state that the FCA “considered it was important to discuss opportunities for achieving good outcomes for customers, integrity in markets, as well as efficiencies in firms,” the clear focus of the FCA’s discussion was on consumer protection.

This position sits within the context of UK regulators’ continuing attention to the impact of AI on financial services and its consequences for consumers, and follows two other significant developments in this area:

  • the FCA’s August 2022 Discussion Paper with the Bank of England and the Prudential Regulation Authority, which considered the benefits of AI in financial services and the relevance of regulatory requirements in mitigating associated risks; and
  • the July 2023 speech by Nikhil Rathi, the FCA’s Chief Executive, setting out the FCA’s regulatory approach to Big Tech and AI.

The board minutes reference the July 2023 speech, which also emphasised that the FCA regards existing UK regulatory frameworks as addressing many AI-associated risks, highlighting in that case both the Consumer Duty and the Senior Managers & Certification Regime (SMCR), under which senior managers are ultimately accountable for a firm’s activities.

The FCA has in the past used its board minutes to flag its high-level regulatory approach. The current minutes are therefore significant in sending a clear signal that the FCA expects firms to take steps to manage the risks of AI under the existing regulatory regime, even while discussions continue about the extent to which new rules will be required to address specific AI issues.

Stuart Davis and Fiona M. Maclean are Partners and Gabriel Lakeman and Imaan Nazir are Associates at Latham & Watkins LLP. This post was originally published on the firm’s blog.

The views, opinions, and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness, and validity of any statements made on this site and will not be liable for any errors, omissions, or representations. The copyright of this content belongs to the author(s), and any liability with regard to infringement of intellectual property rights remains with the author(s).