by Megan Bannigan, Avi Gesser, Henry Lebowitz, Benjamin Leb, Jarrett Lewis, Melissa Muse, Michael R. Roberts, and Lex Gaillard
Last month, we wrote about why many companies probably need a policy for Generative AI tools like ChatGPT, Bard, and Claude (which we refer to collectively as “ChatGPT”). We discussed how employees were using ChatGPT for work (e.g., fact-checking, preparing first drafts, editing documents, generating ideas, and coding) and the various risks of allowing all employees at a company to use ChatGPT without restriction (e.g., quality control, contractual, privacy, consumer protection, intellectual property, and vendor management risks). We then offered some suggestions for reducing these risks, including adopting a ChatGPT policy that sorts ChatGPT use cases into three categories: (1) uses that are prohibited; (2) uses that are permitted with some restrictions, such as labeling, training, and monitoring; and (3) uses that are generally permitted without any restrictions.