Tag Archives: Anna Gressel

Overview of Global AI Regulatory Developments and Some Tips to Reduce Risk

by Avi Gesser, Matt Kelly, Anna Gressel, Corey Goldstein, Samuel Allaman, Michael Pizzi, Jackie Dorward, Lex Gaillard, and Ned Terrace

With last week’s political deal in the European Parliament to advance the European Union’s groundbreaking AI Act (the “EU AI Act”), Europe is one step closer to enacting the world’s first comprehensive AI regulatory framework. Yet while the EU is poised to become the first jurisdiction to take this step, other countries are not far behind. In recent months, the U.S., Canada, Brazil, and China have all introduced measures that illustrate their respective goals and approaches to regulating AI, with the AI regimes in Canada and Brazil appearing to be modeled substantially on the EU AI Act.

In this blog post, we provide an overview of these legislative developments, highlighting key similarities, differences, and trends among the countries’ approaches, and offering a few considerations for companies deploying significant AI systems.

NYC’s AI Hiring Law Is Now Final and Effective July 5, 2023

by Avi Gesser, Anna Gressel, Jyotin Hamid, Tricia Bozyk Sherno, and Basil Fawaz

The New York City Department of Consumer and Worker Protection (the “DCWP”) has adopted final rules (the “Final Rules”) regulating the use of artificial intelligence in hiring. New York City’s Automated Employment Decision Tool Law (the “AEDT Law” or the “Law”) requires covered employers to conduct annual independent bias audits of those tools and to post public summaries of the audit results. To recap, the DCWP released an initial set of proposed rules on September 23, 2022, and held a public hearing on November 4, 2022. Due to the high volume of comments expressing concern over the Law’s lack of clarity, the DCWP issued a revised set of proposed rules on December 23, 2022, and held a second public hearing on January 23, 2023. After issuing the Final Rules, the DCWP delayed enforcement of the Law for a second time, from April 15, 2023, to July 5, 2023.

Colorado Draft AI Insurance Rules Are a Watershed for AI Governance Regulation

by Eric Dinallo, Avi Gesser, Erez Liebermann, Marshal Bozzo, Anna Gressel, Sam Allaman, Melissa Muse, and Jackie Dorward

On February 1, 2023, the Colorado Division of Insurance (“DOI”) released its draft Algorithm and Predictive Model Governance Regulation (the “Draft AI Regulation”). The Draft AI Regulation imposes requirements on Colorado-licensed life insurance companies that use external data and AI systems in insurance practices. This release follows months of highly active engagement between the DOI and industry stakeholders, resulting in a first-in-the-nation set of AI and Big Data governance rules that is likely to influence state, federal, and international AI regulation for many years to come.

Does Your Company Need a ChatGPT Policy? Probably.

by Megan Bannigan, Avi Gesser, Henry Lebowitz, Anna Gressel, Michael R. Roberts, Melissa Muse, Benjamin Leb, Jarrett Lewis, Lex Gaillard, and ChatGPT

ChatGPT is an AI language model developed by OpenAI that was released to the public in November 2022 and already has millions of users. While most people initially used the publicly available version of ChatGPT for personal tasks (e.g., generating recipes, poems, or workout routines), many have started using it for work-related projects. In this Debevoise Data Blog post, we discuss how people are using ChatGPT at their jobs, the risks associated with that use, and the policies companies should consider implementing to reduce those risks.

Legal Risks of Using AI Voice Analytics for Customer Service

by Avi Gesser, Johanna Skrzypczyk, Robert Maddox, Anna Gressel, Martha Hirst, and Kyle Kysela

There is a growing trend among customer-facing businesses toward using artificial intelligence (“AI”) to analyze voice data on customer calls. Companies are using these tools for various purposes, including identity verification, targeted marketing, fraud detection, cost savings, and improved customer service. For example, AI voice analytics can detect whether a customer is very upset, and therefore should be promptly connected with an experienced customer service representative, or whether the person on the phone is not really who they purport to be. These tools can also assist customer service representatives in de-escalating calls with upset customers by making real-time suggestions of phrases to use that only the representative can hear, and can evaluate an employee’s performance in dealing with a difficult customer (e.g., did the employee raise her voice, and did she manage to get the customer to stop raising his?).

Some of the more novel and controversial uses for AI voice analytics in customer service include (1) detecting whether a customer is being dishonest, (2) inferring a customer’s race, gender, or ethnicity, and (3) assessing when certain kinds of customers with particular concerns purchase certain goods or services, and developing a corresponding targeted marketing strategy.  

The Digital Services Act (DSA) Transforms Regulation of Online Intermediaries

by Avi Gesser, Anna Gressel, and Michael Pizzi

On July 5, 2022, the European Parliament voted to approve the final text of the Digital Services Act (“DSA” or the “Act”), a landmark regulation that—along with its sister regulation, the Digital Markets Act (“DMA”)—is poised to transform the global regulatory landscape for social media platforms, hosting services like cloud service providers, and other online intermediaries.

Lawmakers have billed the DSA as implementing the principle that “what is illegal offline, should be illegal online.” In reality, the DSA goes much further, requiring online platforms not only to take greater accountability for the “illegal” and “harmful” content they host, but also to provide unprecedented transparency around their content moderation practices, targeted advertising, and recommender algorithms, and to maintain comprehensive risk management systems for a potentially wide range of systemic risks, from public health crises to political misinformation.

In this Debevoise Data Blog post, we provide an update on the status of the DSA, an overview of the key features of this landmark regulation, and several takeaways for companies about the DSA’s significance.

Complying with New York’s AI Employment Law and Similar Regulations

by Avi Gesser, Jyotin Hamid, Tricia Bozyk Sherno, Anna Gressel, Scott M. Caravello, and Rachel Tennell

A growing number of employers are turning to artificial intelligence (“AI”) tools to assist with hiring. Many Fortune 500 companies use talent-sifting software, and more than half of human resource leaders in the U.S. leverage predictive algorithms to support hiring. Widespread adoption of these tools has led to concerns from regulators and legislators that the tools may inadvertently discriminate, for example, by:

  • Penalizing job candidates with gaps in their resumes, leading to a bias against older women who have taken time off for childcare;
  • Recommending candidates for interviews who resemble the company’s current leadership, which is not diverse; or
  • Evaluating employees for promotions using automated games that are unfairly difficult for individuals with disabilities, even though those individuals could do the job with a reasonable accommodation.

New Automated Decision-Making Laws: Four Tips for Compliance

by Avi Gesser, Robert Maddox, Anna Gressel, Mengyi Xu, Samuel Allaman, and Andres Gutierrez

With the widespread adoption of artificial intelligence (“AI”) and other complex algorithms across industries, many business decisions that used to be made by humans are now being made (either solely or primarily) by algorithms or models. Examples of automated decision-making (“ADM”) include determining:

  • Who gets an interview, a job, a promotion, or employment discipline;
  • Which ads get displayed for a user on a website or a social media feed;
  • Whether someone’s credit application should be approved, and at what interest rate;
  • Which investments should be made;
  • When a car should brake or swerve to stay in its lane;
  • Which emails are spam and should not be read; and
  • Which transactions should be flagged or blocked as potentially fraudulent, related to money laundering, or in violation of sanctions regulations.
