Tag Archives: Jarrett Lewis

SEC Proposes Rule to Eliminate or Neutralize Conflicts in the Use of “Predictive Data Analytics” Technologies

by Andrew J. Ceresney, Charu A. Chandrasekhar, Avi Gesser, Jeff Robins, Matt Kelly, Gary E. Murphy, Jarrett Lewis, Robert B. Kaplan, Marc Ponchione, Sheena Paul, Catherine Morrison, Julie M. Riewe, Kristin A. Snyder, and Mengyi Xu

On July 26, 2023, the U.S. Securities and Exchange Commission (“SEC”) issued proposed rules (the “Proposed Rules”) that would require broker-dealers and investment advisers (collectively, “firms”) to evaluate their use of predictive data analytics (“PDA”) and other covered technologies in connection with investor interactions and to eliminate or neutralize certain conflicts of interest associated with such use. The Proposed Rules also contain amendments to rules under the Securities Exchange Act of 1934[1] (“Exchange Act”) and the Investment Advisers Act of 1940[2] (“Advisers Act”) that would require firms to have policies and procedures to achieve compliance with the rules and to make and maintain related records.

In this memorandum, we first discuss the scope of the Proposed Rules and provide a summary of key provisions. We also discuss some key implications regarding the scope and application of the rules if adopted as proposed. The full text of the proposal is available here.

Does Your Company Need a ChatGPT Pilot Program? Probably.

by Megan Bannigan, Avi Gesser, Henry Lebowitz, Benjamin Leb, Jarrett Lewis, Melissa Muse, Michael R. Roberts, and Lex Gaillard

Last month, we wrote about how many companies probably need a policy for Generative AI tools like ChatGPT, Bard, and Claude (which we collectively refer to as “ChatGPT”). We discussed how employees were using ChatGPT for work (e.g., for fact-checking, first drafts, editing documents, generating ideas, and coding) and the various risks of allowing all employees at a company to use ChatGPT without any restrictions (e.g., quality control, contractual, privacy, consumer protection, intellectual property, and vendor management risks). We then provided some suggestions for ways that companies could reduce these risks, including having a ChatGPT policy that organizes ChatGPT use cases into three categories: (1) uses that are prohibited; (2) uses that are permitted with some restrictions, such as labeling, training, and monitoring; and (3) uses that are generally permitted without any restrictions.

Does Your Company Need a ChatGPT Policy? Probably.

by Megan Bannigan, Avi Gesser, Henry Lebowitz, Anna Gressel, Michael R. Roberts, Melissa Muse, Benjamin Leb, Jarrett Lewis, Lex Gaillard, and ChatGPT

ChatGPT is an AI language model developed by OpenAI that was released to the public in November 2022 and already has millions of users. While most people initially used the publicly available version of ChatGPT for personal tasks (e.g., generating recipes, poems, and workout routines), many have started to use it for work-related projects. In this Debevoise Data Blog post, we discuss how people are using ChatGPT at their jobs, what the associated risks are, and what policies companies should consider implementing to reduce those risks.
