Three Takeaways from the IOSCO Report to Securities Regulators on Artificial Intelligence

by Avi Gesser, Anna R. Gressel, and Mengyi Xu

On September 7, 2021, the Board of the International Organization of Securities Commissions (“IOSCO”) issued a final report (PDF: 446 KB) entitled “The Use of Artificial Intelligence and Machine Learning by Market Intermediaries and Asset Managers” (the “Report”), which aims to assist IOSCO members in supervising their regulated entities’ use of artificial intelligence (“AI”) and machine learning (“ML”).

While non-binding, the Report is likely to serve at least as a key frame of reference—if not as a benchmark—for the development of more tailored supervisory approaches by securities regulators around the globe. While the concepts in the Report are not new, they reflect an acknowledgement that existing regulations may not be sufficient to mitigate the wide variety of AI risks, and that new and tailored regulations targeting asset managers’ and market intermediaries’ use of AI may be needed.


The Future of AI Regulation: 24 Ways That Companies Can Reduce Their Regulatory and Reputational AI Risks

by Avi Gesser, Anna R. Gressel, and Tara Raam

This post is Part V of a five-part series by the authors on The Future of AI Regulation. For Part I, discussing U.S. banking regulators’ recent request for information regarding the use of AI by financial institutions, click here. For Part II, outlining key features of the EU’s draft AI legislation, click here. For Part III, discussing new obligations for companies under the EU’s draft AI legislation, click here. For Part IV, discussing a recent FTC blog post on companies’ use of AI, click here.

In this final post, we draw on these important developments in AI regulation, along with other recently issued guidance, to present a list of 24 measures that companies can adopt now to prepare for the coming AI regulatory landscape. This list updates a post we wrote last year on the same topic.


The Future of AI Regulation: The FTC’s New Guidance on Using AI Truthfully, Fairly, and Equitably

by Avi Gesser, Anna R. Gressel, and Parker C. Eudy

This post is Part IV of a five-part series by the authors on The Future of AI Regulation. For Part I, discussing U.S. banking regulators’ recent request for information regarding the use of AI by financial institutions, click here. For Part II, outlining key features of the EU’s draft AI legislation, click here. For Part III, discussing new obligations for companies under the EU’s draft AI legislation, click here.

In this installment, we discuss the Federal Trade Commission’s (“FTC”) recent blog post entitled “Aiming for truth, fairness, and equity in your company’s use of AI,” which was released on April 19, 2021.


The Future of AI Regulation: Draft Legislation from the European Commission Shows the Coming AI Legal Landscape

by Avi Gesser, Anna R. Gressel, and Steven Tegrar

This post is Part III of a five-part series by the authors on The Future of AI Regulation. For Part I, discussing U.S. banking regulators’ recent request for information regarding the use of AI by financial institutions, click here. For Part II, outlining key features of the EU’s draft AI legislation discussed further in this Part, click here.

The Future of AI Regulation: Draft Legislation from the European Commission Shows the Coming AI Legal Landscape

by Avi Gesser, Anna R. Gressel, and Steven Tegrar

This post is Part II of a five-part series by the authors on The Future of AI Regulation. For Part I, discussing U.S. banking regulators’ recent request for information regarding the use of AI by financial institutions, click here.

On April 21, 2021, the European Commission published its highly anticipated draft legislation governing the use of AI, which is being referred to as the “GDPR of AI” because, if enacted, it would place potentially onerous compliance obligations on a wide spectrum of companies using AI systems. The Commission proposes to regulate AI based on the potential risk posed by its intended use: AI systems that pose an “unacceptable risk” would be banned outright; AI classified as “high risk” would be subject to stringent regulatory and disclosure requirements; and certain interactive, deepfake, and emotion recognition systems would be subject to heightened transparency obligations.


The Future of AI Regulation: The RFI on AI from U.S. Banking Regulators

by Avi Gesser, Anna R. Gressel, and Amy Aixi Zhang

This post is Part I of a five-part series by the authors on The Future of AI Regulation.

Several recent developments provide new insight into the future of artificial intelligence (“AI”) regulation. First, on March 29, 2021, five U.S. federal regulators published a request for information (“RFI”) seeking comments on the use of AI by financial institutions. Second, on April 19, the FTC issued a document entitled “Aiming for truth, fairness, and equity in your company’s use of AI,” which provides seven lessons on what the FTC views as responsible AI use. Third, on April 21, the European Commission released its much-anticipated draft regulation on AI, which is widely viewed as the first step in establishing a GDPR-like comprehensive EU law on automated decision making. In this series on the future of AI regulation, we will examine each of these developments, what they mean for the future of AI regulation, and what companies can do now to prepare for the coming AI regulatory landscape.


Tips for Creating a Sensible Cybersecurity and AI Risk Framework for Critical Vendors

by Avi Gesser, Anna Gressel, Zila Reyes Acosta-Grimes, and Michael Bloom

Companies face increasing cybersecurity and AI risk from third-party vendors. Cybersecurity risks arise when companies share sensitive personal data or company information with their vendors, or when their vendors have direct access to the company’s information systems. Companies using AI technology developed by a vendor also face risk if the AI behaves unexpectedly and causes negative impacts, including on critical business operations. In recognition of these kinds of third-party data risks, on October 30, 2020, federal banking agencies—including the Board of Governors of the Federal Reserve System, the Office of the Comptroller of the Currency (“OCC”), and the Federal Deposit Insurance Corporation (“FDIC”)—released a joint paper (the “Joint Paper”) outlining sound practices designed to help banks increase operational resilience.


Destruction Emerges as a Powerful Enforcement Measure for AI: FTC Requires Company to Delete Models Trained with Improperly Utilized Consumer Data

by Jeremy Feigelson, Avi Gesser, Jim Pastore, Justin C. Ferrone, Anna R. Gressel, Paul D. Rubin, and Melissa Runsten

For those following emerging artificial intelligence (“AI”) regulations and enforcement closely, one issue of great interest is remedies. In particular: in what circumstances, if any, would regulators or courts find that a flawed machine learning or AI model must be scrapped entirely? A hot-off-the-press decision from the U.S. Federal Trade Commission (the “FTC”) suggests regulators will not shy away from saying “scrap it.”


The UK School Algorithm Debacle: Five Lessons for Corporate AI Programs

by Avi Gesser, Anna R. Gressel, and Robin Lööf

The widespread criticism, and partial abandonment, of the algorithm that was used to evaluate UK students serves as a useful reminder that corporate AI programs carry significant regulatory and reputational risks, and that careful planning, testing, and governance are needed throughout the process to mitigate those risks.


Schrems II: Privacy Shield Invalid and Severe Challenges for Standard Contractual Clauses

Yesterday, the Court of Justice of the European Union (CJEU), the EU’s highest court, invalidated the EU-U.S. Privacy Shield for cross-border transfers of personal data.  The CJEU’s decision also cast significant doubts over whether companies can continue to use the European Commission-approved Standard Contractual Clauses (SCCs) to transfer EU personal data to the U.S., or to other jurisdictions with similarly broad surveillance regimes.  The CJEU’s lengthy decision is here and its short-form press release is here (PDF: 319.62 KB).

What does this mean for organizations that rely on Privacy Shield or SCCs? History suggests that privacy enforcement authorities in the EU may hold their fire while efforts are made to come up with a replacement system for data transfers, and EU authorities hopefully will clarify their enforcement intentions soon. In any event, organizations that have relied on Privacy Shield will have to turn immediately to considering what practical alternatives they might adopt. U.S. government authorities will also have to confront the knotty question of what data transfer mechanisms might ever satisfy the CJEU, given persistent EU concerns about U.S. government surveillance of personal data.
