by Melissa MacGregor (SIFMA), Avi Gesser, Matt Kelly, Stephanie Thomas, and Ned Terrace
The proliferation of AI tools and rapid pace of AI adoption have led to calls for new regulation at all levels. President Biden recently said “[w]e need to manage the risks [of AI] to our society, to our economy, and our national security.” The Senate Judiciary Subcommittee on Privacy, Technology and the Law recently held a hearing on “Rules for Artificial Intelligence” to discuss the need for AI regulation, while Senate Majority Leader Schumer released a strategy to regulate AI.
The full benefits of AI can only be realized by ensuring that AI is developed and used responsibly, fairly, securely, and transparently to establish and maintain public trust. But it is critical to find the right mix of high-level principles, concrete obligations, and governance commitments for effective AI regulation.
As the leading industry trade association representing broker-dealers, investment banks, and asset managers operating in the U.S. and global capital markets, SIFMA has proposed a practical, risk-based approach to regulating AI that contains strong accountability measures for high-risk AI uses, while providing flexibility to allow industry to innovate. At its core, the SIFMA approach would require companies, under the supervision of their sectoral regulators, to (1) identify how AI is being used, (2) determine which AI uses pose the highest risks, (3) have qualified persons or committees at the company review high-risk AI applications and determine whether the risks are too high, and if so, (4) provide meaningful mitigation steps to reduce those risks to an acceptable level or require that the AI application be abandoned.
To achieve these objectives, any AI regulation should include the following components:
- Scoping. Companies should determine which AI applications fall within the scope of the framework when building their governance programs.
- Inventory. Companies should prepare and maintain an inventory of their AI applications with sufficient detail to allow them to be risk rated.
- Risk Rating. Companies should have a process for identifying their highest-risk AI applications. The risks considered would include legal and regulatory risks, as well as operational, reputational, contractual, discrimination, cybersecurity, privacy, consumer harm, lack of transparency, and confidentiality risks.
- Responsible Persons or Committees. Companies should designate one or more individuals or committees who are responsible for identifying and assessing their highest-risk AI applications, and either accepting those risks, mitigating them, or abandoning the particular AI application because the risks are too high.
- Training. Companies should develop training programs to ensure that stakeholders are able to identify the risks associated with their AI use and the various options for reducing risk.
- Documentation. Companies should maintain documentation sufficient for an audit of the risk assessment program.
- Audit. Companies should conduct periodic audits that focus on the effectiveness of the risk assessment program, rather than on individual AI applications. Companies should be permitted to determine how and when audits should be conducted, and who can conduct those audits.
- Third-Party Risk Management. Companies should use the same risk-based principles that are applied to in-house AI applications to evaluate third-party AI applications, and mitigate those risks through diligence, audits, and contractual terms.
This proposed framework could be incorporated into existing governance and compliance programs in related areas such as model risk, data governance, privacy, cybersecurity, vendor management, and product development, with further guidance from applicable sectoral regulators as needed. Further, having qualified persons identify, assess, and mitigate the risks associated with the highest-risk AI uses promotes accountability, appropriate allocation of resources, and employee buy-in through clearly defined and fair processes.
Given the rapid rate of AI adoption and its potential societal impact, policymakers are facing increased pressure to enact AI regulation. SIFMA’s risk-based approach would provide a valuable, flexible framework through which companies and their sectoral regulators can build tailored AI governance and compliance programs that ensure accountability and trust without stifling innovation or wasting time or resources on low-risk AI applications.
Melissa MacGregor is Deputy General Counsel and Corporate Secretary at SIFMA and Avi Gesser is a Partner, Matt Kelly is Counsel, and Stephanie Thomas and Ned Terrace are Associates at Debevoise & Plimpton LLP. Debevoise assisted SIFMA in preparing its response to the National Telecommunications and Information Administration Request for Comment on AI Accountability Policy, which is the basis for this blog post. This post first appeared on Debevoise's data blog and on SIFMA's website. The authors would like to thank Debevoise Summer Law Clerk Esther Tetruashvily for her contribution to this blog post.
The views, opinions, and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness, and validity of any statements made on this site and will not be liable for any errors, omissions, or representations. The copyright of this content belongs to the author(s), and any liability with regard to infringement of intellectual property rights remains with the author(s).