by Marc Gilman

On December 5, 2024, the U.S. Commodity Futures Trading Commission (the “CFTC,” or the “Commission”) staff issued an advisory related to the use of artificial intelligence (“AI”) by CFTC-registered entities and registrants (the “Advisory”). In tandem, two CFTC representatives – Chairman Rostin Behnam and Commissioner Kristin N. Johnson – released statements supporting the Advisory and offering thoughts about the current and future implications of AI on CFTC registrants. This blog post will summarize the contents of the Advisory as well as the related statements of the CFTC representatives to collect a set of practical considerations for designing CFTC compliance programs to meet evolving regulatory expectations for the use of AI.
Background
The CFTC has been considering the implications of AI for its registrants for several years, having issued a request for comments regarding the use of AI in January of 2024[1] and created an AI Task Force, among other engagements with market participants, other regulators, and government stakeholders. As Chairman Behnam states, the Advisory further supports these efforts: “[g]iven the dynamic nature of artificial intelligence and the growing integration of AI in derivatives markets, the advisory is a measured first step to engage with the marketplace and ensure ongoing compliance with the Commodity Exchange Act and the CFTC’s regulations.”
The Advisory comprises two sections–first, a section on the implications of AI under the Commodity Exchange Act (the “CEA”) and CFTC Regulations and, second, details about the CFTC staff’s ongoing monitoring of the development and implementation of AI technologies.
At the outset, the CFTC reminds market participants that the use of AI is equivalent to the use of any other technology and must be employed consistent with existing rulesets:
Staff expects that CFTC-regulated entities will assess the risks of using AI and update policies, procedures, controls, and systems, as appropriate, under applicable CFTC statutory and regulatory requirements. In addition, as with all material system or process changes, a CFTC-regulated entity should ensure that its adoption of AI has been reviewed for compliance with the CEA and CFTC regulations.
Chairman Behnam echoes this sentiment in his statement noting that the “advisory is emblematic of the CFTC’s technology-neutral approach, which balances market integrity with responsible innovation in the derivatives markets. As firms may thread AI into the fabric of nearly every aspect of their operations, staff intends to monitor for any risks from AI that may merit policy or regulatory consideration.”
Moreover, because of the dynamic nature of AI and its potential use across a range of use cases–only some of which are covered in the Advisory–the CFTC reminds readers that the Advisory “is not a compliance checklist or substitute for appropriate risk assessments or governance by a CFTC-regulated entity. Rather, the Advisory provides a non-exhaustive list of existing statutory and regulatory requirements that may be potentially implicated by CFTC-regulated entities’ use of AI.”
The Use of AI by CFTC-regulated Entities
The Advisory breaks down CFTC registrants into three broad buckets and analyzes the potential use cases and regulatory concerns applicable to each.
For Designated Contract Markets (“DCMs”), Swap Execution Facilities (“SEFs”), and Swap Data Repositories (“SDRs”), the CFTC highlights three core use cases for AI. Since these entities’ primary purpose is providing forums for CFTC transactions and related activities, the use of AI is considered in the context of order processing and trade matching, market surveillance, and system safeguards. With respect to order processing, trade matching, and market surveillance, the CFTC discusses the use of AI to “anticipate trades before they happen, for the purposes of allocating system resources in advance to optimize those resources and reduce post-trade message latencies” as well as “detection of abusive trading practices, investigation of rule violations, and real-time market monitoring.”
From a system controls standpoint, the CFTC references several controls that are consistently tested as part of industry-standard frameworks like NIST, SOC, and ISO. Controls include “enterprise risk management and governance, information security, business continuity and disaster recovery planning and resources, capacity and performance planning, systems operations, systems development and quality assurance, and physical security and environmental controls.” Given cybersecurity’s importance across CFTC entities and the financial sector as a whole, system controls are referenced throughout the Advisory as important considerations for the implementation of AI technologies.
Next, the Commission analyzes AI use cases for Derivatives Clearing Organizations (“DCOs”), which include member assessment and interaction, settlement, and, again, system safeguards. With respect to member assessment, several uses are contemplated including “in connection with the review of its clearing members’ compliance with DCO rules, and to support communications with its members on any number of topics. AI chatbots may have access to external or internal datasets, including datasets comprised of member information.” On the settlement front, AI might be used to “facilitate netting or offset of positions as the AI works to validate data, mine data anomalies prior to settlement, or identify failed trades.” The uses for system controls are consistent with those outlined in the prior section and contemplate AI’s role to support business continuity and ensure the secure operation of critical systems.
Finally, the CFTC considers how market participants–Swap Dealers (“SDs”), Futures Commission Merchants (“FCMs”), Commodity Pool Operators (“CPOs”), and others–might use AI. The CFTC articulates three core areas where AI may be used by these entities: risk assessment and risk management, compliance and recordkeeping, and customer protection. From a compliance perspective, the CFTC notes that AI “may support the accuracy and timeliness of financial information and risk disclosures that are provided to the Commission, National Futures Association, and the registrant’s customers.” The Commission reminds entities that these uses are subject to existing mandates noting that “[f]or example, a CPO that used generative AI to update a disclosure document or prepare periodic account statements for a commodity pool would still be subject to all the requirements of Part 4 of the Commission’s regulation.” Regarding customer protection, the CFTC points to the segregation of customer funds as one area where AI may be applied to bolster protections while reminding market participants that the use of AI in these scenarios is subject to Part 1 of the CFTC’s regulations. Regarding risk assessment and management, the CFTC observes that AI may be used to calculate initial and variation margin for swaps.
Compliance Considerations
From a compliance perspective, a clear throughline in the Advisory is the Commission’s expectation that its existing rules and regulations will apply to any use of AI. What this means in practice is that organizations must consider the potential ramifications of existing CFTC rules when developing and deploying AI for supporting regulated business.
The specific regulatory references in the Advisory point toward the CFTC’s expectations for demonstrable compliance in scenarios where AI is deployed. For example, the reference to Part 1 of the CFTC’s regulations as applicable broadly to the use of AI for customer protection for SDs, FCMs and other registrants essentially invokes a wide range of requirements such as financial reporting, risk management, custody of customer assets, liability, and recordkeeping. Part 1 must be top of mind for registrants as they apply AI to these areas of reporting, risk management, and management of customer assets.
Similarly expansive statements are included in references to adherence with the Commission’s Core Principles for SEFs and DCMs. The Core Principles references are not pinpoint cites intended to remind organizations of highly specialized or technology-specific mandates, but rather are overarching obligations intended to protect firms, customers, and markets. The use of AI applications for SEF market surveillance described in the Advisory includes references to several sections of the Core Principles including 2 (Compliance with Rules), 3 (Swaps Not Readily Susceptible to Manipulation), and 4 (Monitoring of Trading and Trade Processing). Taken together, these Core Principles references bind SEFs to use AI in a manner consistent with the collective expectations of SEFs generally–there are no carve-outs or exceptions for the use of AI in regulated scenarios.
Given these broad expectations, how can organizations approach the development and deployment of AI for meaningful regulated use?
Registrants should incorporate a basic risk assessment framework and apply it to the lifecycle of the use of AI. This may require compliance officers to adopt a software development lifecycle-like mindset toward risk assessment, but that is a good thing. An AI risk assessment means considering the entire lifecycle of AI, from the development and training of AI features to the controls for deployment and ongoing monitoring of AI features post-implementation. This software development-influenced approach replaces a “set it and forget it” approach to AI with a stepwise strategy for the creation and use of AI that can evolve and adapt to its operation in complex technology environments.
The following are a few points compliance teams should consider in assessing the risk of AI applications and other issues related to deployment, use, and oversight.
- Understand how AI tools are developed and maintained at your organization so that a regulatory analysis can be applied at the early stages. Cross-functional teams from IT, compliance, legal, risk, and audit must coordinate throughout the process. Compliance and legal teams must understand the potential regulatory risks posed by AI applications to advise on appropriate use and potential modifications to meet applicable obligations.
- Organizations should consider creating policies and procedures describing how AI models are developed, deployed, and reviewed on a continuous basis to ensure adequate oversight. These policies will help organizations account for changes in the training of AI models as well as related ramifications for how models are used. So, for example, if an AI application is used to support a particular asset class or customers in a particular jurisdiction and those variables change, the AI models can be reviewed and additional actions taken to remediate any resulting compliance issues.
- Compliance teams should assist in the development and review of explainability documentation describing how AI applications function in simple and direct language. Basic explainability documentation provides regulators like the CFTC as well as customers and others interacting with AI applications transparency regarding how the application makes decisions and fundamentals about how the applications are trained and maintained. Explainability also assists users in interpreting the results or feedback of AI-enabled processes or decisions.
- Critically, compliance officers must consider related business continuity, disaster recovery, and incident response protocols associated with the use of AI. AI applications that support core business operations must be evaluated to ensure that they can be relied upon.
- Testing is essential. Any AI application deployed to support the operation of critical firm systems or applications that provide features for customers or counterparties must be routinely tested to ensure that, in the event of an outage or availability issue, systems can fail over and function appropriately.
- Compliance teams should also review existing incident response plans to consider if the use of AI may trigger new or novel reporting or remedial activities in the event of a security issue or data breach. The CFTC and other federal, state or local entities may require notifications in certain scenarios, so considering these obligations as they relate to the use of AI is key.
What the Future Holds
Commissioner Johnson provides a glimpse into the future in her statement noting that, in addition to the Advisory and related guidance, she is advocating for: “an AI Fraud Task Force, enhanced information gathering on the use and adoption of AI technologies by market participants, the development of an interagency task force among market and prudential regulators, and a formal policy of enhancing penalties for bad actors who use AI to lure vulnerable investors into handing over their hard-earned cash to fraudsters conjuring up deepfake investment schemes using easily and cheaply acquired or adapted generative AI technologies.”
Clearly, there remains much to be done to develop compliance and organizational frameworks for managing the multitude of risks posed by AI. While the creation and staffing of an AI Fraud Task Force and other interagency collaboration may take time, firms can uplevel and adapt existing protocols in the near term to encompass the risks of AI applications, conform to existing CFTC expectations, and establish good baseline practices to meet emerging regulatory and operational expectations.
Footnotes
[1] Theta Lake, Inc. submitted a response to the RFI, which is publicly available on the CFTC’s website. The views in this blog post are my own and do not reflect any interactions or guidance from the CFTC.
Marc Gilman is General Counsel and VP of Compliance at Theta Lake as well as an adjunct professor at Fordham University School of Law.
The views, opinions and positions expressed within all posts are those of the authors alone and do not represent those of the Program on Corporate Compliance and Enforcement or of New York University School of Law. The accuracy, completeness and validity of any statements made within this article are not guaranteed. We accept no liability for any errors, omissions or representations. The copyright of this content belongs to the authors and any liability with regards to infringement of intellectual property rights remains with them.
