Tag Archives: Jackie Dorward

Treasury’s Report on AI (Part 2) – Managing AI-Specific Cybersecurity Risks in the Financial Sector

by Avi Gesser, Erez Liebermann, Matt Kelly, Jackie Dorward, and Joshua A. Goland

Top: Avi Gesser, Erez Liebermann, and Matt Kelly. Bottom: Jackie Dorward and Joshua A. Goland (Photos courtesy of Debevoise & Plimpton LLP)

This is the second post in the two-part Debevoise Data Blog series covering the U.S. Treasury Department’s report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (the “Report”).

In Part 1, we addressed the Report’s coverage of the state of AI regulation and best practices recommendations for AI risk management and governance. In Part 2, we review the Report’s assessment of AI-enhanced cybersecurity risks, as well as the risks of attacks against AI systems, and offer guidance on how financial institutions can respond to both types of risks.

Overview of Global AI Regulatory Developments and Some Tips to Reduce Risk

by Avi Gesser, Matt Kelly, Anna Gressel, Corey Goldstein, Samuel Allaman, Michael Pizzi, Jackie Dorward, Lex Gaillard, and Ned Terrace

Top row from left to right: Avi Gesser, Matt Kelly, Anna Gressel, Corey Goldstein, and Samuel Allaman
Bottom row from left to right: Michael Pizzi, Jackie Dorward, Lex Gaillard, and Ned Terrace (photos courtesy of Debevoise & Plimpton LLP)

With last week’s political deal in the European Parliament to advance the European Union’s groundbreaking AI Act (the “EU AI Act”), Europe is one step closer to enacting the world’s first comprehensive AI regulatory framework. Yet while the EU is poised to become the first jurisdiction to take this step, other countries are not far behind. In recent months, the U.S., Canada, Brazil, and China have all introduced measures that illustrate their respective goals and approaches to regulating AI, with the AI regimes in Canada and Brazil appearing to be modeled substantially on the EU AI Act.

In this blog post, we provide an overview of these legislative developments, highlighting key similarities, differences, and trends across each country’s approach, and offer a few considerations for companies deploying significant AI systems.

Colorado Draft AI Insurance Rules Are a Watershed for AI Governance Regulation

by Eric Dinallo, Avi Gesser, Erez Liebermann, Marshal Bozzo, Anna Gressel, Sam Allaman, Melissa Muse, and Jackie Dorward

Top row, left to right: Eric Dinallo, Avi Gesser, Erez Liebermann, and Marshal Bozzo. Bottom row, left to right: Anna Gressel, Sam Allaman, and Melissa Muse (Photos courtesy of Debevoise & Plimpton LLP)

On February 1, 2023, the Colorado Division of Insurance (“DOI”) released its draft Algorithm and Predictive Model Governance Regulation (the “Draft AI Regulation”). The Draft AI Regulation imposes requirements on Colorado-licensed life insurance companies that use external data and AI systems in insurance practices. This release follows months of highly active engagement between the DOI and industry stakeholders, resulting in a first-in-the-nation set of AI and Big Data governance rules that will influence state, federal, and international AI regulations for many years to come.

Regulators Should Treat AI Like Employees to Avoid Stifling Innovation

by Avi Gesser, Jehan A. Patterson, Tricia Bozyk Sherno, Frank Colleluori, and Anna R. Gressel

We recently wrote about how rights-based regulatory regimes for artificial intelligence (as opposed to risk-based frameworks) can lead to a misallocation of resources: compliance demands too much effort for low-risk AI (e.g., spam filters, graphics generation for games, inventory management) and not enough for AI that can actually pose a high risk of harm to consumers or the public (e.g., hiring, lending, underwriting). In this follow-up blog post, we discuss why regulators should view AI risk the same way large companies view employee risk, and accordingly adopt risk-based regulatory frameworks for AI.
