On November 17, 2023, the NYU Law Program on Corporate Compliance and Enforcement (PCCE) hosted a standing-room-only full-day conference on Security, Privacy, and Consumer Protection. The conference addressed issues such as managing effective cybersecurity and privacy compliance programs, the use of “dark patterns” to manipulate consumer choices, whether privacy regulation and enforcement actions actually prompt firms to update their privacy policies, and the new amendments to the New York Department of Financial Services cybersecurity rules. A full agenda of the conference, along with speaker bios, is available here. In this post, several participants from the panel on Managing an Effective Privacy Program in a Time of Increasing Regulatory and Legal Risk share further thoughts on the issue.
Privacy Challenges Arising from AI Adoption
by Avi Gesser (Panel Moderator)
The adoption of AI presents several privacy-related challenges for companies. First, many AI projects involve large data sets, which are used in training or fine-tuning the AI models, or in operating them. These large data sets often include personal information of individuals, which may be only incidental to the value of the data. For example, an AI model used for generating investment ideas may be trained on a large volume of investment-related documents that include the personal information of investors, even though that personal information is not needed to train the model. The use of personal information when training, fine-tuning, or operating AI may trigger specific privacy disclosures and/or data protection impact assessments, and when personal information is shared with the generative AI provider, AI consultants, cloud providers, or other third parties, specific contractual provisions may also be required. Accordingly, AI development teams should be careful to look for personal information in the data sets that they are using to train or operate their AI models and, to the extent that data is not needed, de-identify it consistent with privacy laws or delete it. If it is needed, then steps should be taken to ensure that applicable privacy obligations are met.
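To make that screening step concrete, below is a minimal Python sketch of the kind of pass an AI development team might run over documents before they enter a training set. Everything here is illustrative: the patterns, placeholder format, and sample document are hypothetical, and pattern matching alone will miss identifiers such as names (note that “Jane Doe” survives the scrub), which is why real de-identification programs pair it with named-entity recognition and legal review of what counts as personal information under the applicable regime.

```python
import re

# Illustrative patterns only: a real program would use far more robust
# detection and a vetted definition of "personal information."
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub_document(text: str) -> tuple[str, dict[str, int]]:
    """Replace matched identifiers with placeholders and count what was found,
    so reviewers can decide whether a record can be de-identified or should
    simply be dropped from the training set."""
    counts: dict[str, int] = {}
    for label, pattern in PII_PATTERNS.items():
        text, n = pattern.subn(f"[REDACTED-{label.upper()}]", text)
        counts[label] = n
    return text, counts

doc = "Investor Jane Doe (jane.doe@example.com, 212-555-0147) holds 500 shares."
scrubbed, hits = scrub_document(doc)
print(scrubbed)  # Investor Jane Doe ([REDACTED-EMAIL], [REDACTED-PHONE]) holds 500 shares.
print(hits)      # {'email': 1, 'ssn': 0, 'phone': 1}
```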
Second, the adoption of AI can create several risks that are adjacent to privacy concerns but are not necessarily core privacy obligations. For example, a particular AI use case may create risks relating to bias, transparency, explainability, confidentiality, contractual compliance with use-of-data limitations, cybersecurity, quality control, and vendor management, in addition to any privacy risks. As a result, privacy professionals asked to review an AI tool may be uncertain which, if any, of these additional risks they are being asked to identify and evaluate. It is therefore important that privacy professionals be clear about which risks they are and are not providing advice on, while also flagging any non-privacy risks they see in an AI use case under review, so that an appropriate subject-matter expert is aware of those risks and is addressing them.
Avi Gesser is a Partner and Co-Chair of the Data Strategy & Security Group at Debevoise & Plimpton LLP. Previously, he was a federal prosecutor at the U.S. Department of Justice.
Escalation of Cybersecurity and Privacy Matters
by Nicole Friedlander
It can be challenging to determine which cybersecurity and privacy matters merit senior-level attention. To begin with, the legal landscape is constantly changing, with new laws and regulations coming into effect and enforcement increasing each year. In the U.S. alone, in 2023, new comprehensive data privacy laws came into effect in four states; the SEC promulgated new cybersecurity disclosure rules and sued SolarWinds and its Chief Information Security Officer based on novel and expansive legal theories concerning cybersecurity; and the New York Department of Financial Services imposed heightened obligations on covered entities under its Cybersecurity Regulation, including a requirement that the Chief Executive Officer certify the company’s compliance each year. At the same time, companies’ technology and security risks and practices, and the ways in which they collect, use, and store data, are constantly evolving, as are malicious actors’ techniques for causing harm. Against this backdrop, breaches may be more or less extensive than they initially appear, including in terms of reputational and other consequences that can be difficult to assess in real time.
To address these challenges, it is important for legal, privacy, and information security personnel to be aligned on a process for escalating incidents and risks to senior management, and the board should understand how these matters will be escalated to it. Senior management and the board should be briefed periodically on incidents, including, at a minimum, those involving notification to regulators or affected third parties. There should also be a mechanism for more immediate escalation of certain incidents, including those that (for public companies) could be material and require disclosure on Form 8-K under the SEC’s new rules.
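As a rough illustration of how such an escalation mechanism can be made unambiguous, the sketch below encodes the tiers just described as a simple decision rule. The field names and tiers are hypothetical, and real escalation decisions are fact-intensive and made with counsel; the point is only that writing the default routing down explicitly forces alignment on who hears about what, and when.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    PERIODIC_REPORT = "include in periodic briefing"
    SENIOR_MANAGEMENT = "escalate to senior management"
    IMMEDIATE = "escalate immediately (senior management, board, disclosure counsel)"

@dataclass
class Incident:
    potentially_material: bool      # could require Form 8-K analysis (public companies)
    regulator_notification: bool    # notification to a regulator is required
    third_party_notification: bool  # affected individuals or partners must be notified

def escalation_tier(incident: Incident) -> Tier:
    # Mirrors the ordering described above: potential materiality drives
    # immediate escalation; regulator or third-party notification reaches
    # senior management at a minimum; everything else goes into periodic reporting.
    if incident.potentially_material:
        return Tier.IMMEDIATE
    if incident.regulator_notification or incident.third_party_notification:
        return Tier.SENIOR_MANAGEMENT
    return Tier.PERIODIC_REPORT

print(escalation_tier(Incident(False, True, False)).value)
# escalate to senior management
```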
The process of identifying and escalating cybersecurity and privacy risks, apart from incidents, is even more challenging. Risks are inherently more difficult to identify and assess than acute incidents, particularly when they involve complex information systems and technology. Further, while periodic reporting on risks is critical, it is not enough: Senior management and business leaders need to incorporate cybersecurity and privacy considerations into relevant business decisions. This can include decisions regarding new products, new technology the company may use, new jurisdictions in which the company may operate, and new marketing and consumer-facing strategies, among other matters. The board, equally, should set the expectation that management will carry out this responsibility, and understand how it does so. Ultimately, the process requires not only alignment among legal, privacy, and information security personnel, but a commitment by senior management and business leaders to seek early input from, and partner with, these personnel as they carry out the company’s business.
Nicole Friedlander is a Partner and Co-Head of the Cybersecurity Practice at Sullivan & Cromwell LLP. Previously she was a federal prosecutor and Chief of the Complex Frauds and Cybercrime Unit at the U.S. Attorney’s Office for the Southern District of New York.
What are the key elements / focus areas for an effective privacy program in 2024?
by Judy Titera
2023 brought a whirlwind of privacy-related activity, including emerging and rapidly advancing technologies, new international and U.S. state laws and regulations, extensive fines and enforcement, and more. Each of these has a meaningful impact on privacy programs, and it can be confusing or overwhelming to keep up with the pace of change and maintain up-to-date privacy program elements. It is therefore beneficial to take time to pause, assess your current program strengths and opportunities, and identify the strategic decisions and actions necessary to build or maintain program maturity.
Looking to 2024 and beyond, consider how your program measures in these four critical areas:
1. Privacy Policy, Standards, and Procedures: An annual review of all internal privacy policies, standards, and procedures should be part of your normal course of business. For 2024, verify that public-facing privacy notices are reviewed in depth to ensure both compliance with state, federal, and international requirements and that you are doing what you say you are doing. Keep in mind that regulators and litigators can easily use AI to scan and review your notices (a simplified sketch of this kind of automated scanning appears after this list).
2. Incident Response Program: Regulators are increasingly focused on requirements for timely reporting of data and security events. It is critical to review and refine your incident response plan with a focus on enterprise collaboration, escalation paths, timeliness, and clearly defined roles and responsibilities.
3. Operational Integration / Privacy by Design: Your privacy program should be integrated across, and within, the various teams and operational areas of your business. Understanding the organization’s strategy and practices on technology development and innovation, particularly around AI, is critical. Furthermore, having a seat at the table from the beginning allows you to help frame a compliant and ethical approach to technology and data use, greatly enhancing your business’s chances for success.
4. Monitoring / Training: Privacy teams need to help support the organization through this changing and challenging time. Check the systems and metrics you use for monitoring program effectiveness, and design training / awareness initiatives as recurring efforts rather than one-off events. Educating yourself and your team while providing training and expertise to your organization on the foundational aspects of privacy and the risks of new technologies should be an ongoing process.
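Picking up on the first item’s point about automated review, here is a deliberately simple, keyword-based sketch of scanning a public notice against a disclosure checklist. The checklist phrases are hypothetical and keyword matching is a crude stand-in for the AI-driven review the item describes, but it illustrates how cheaply a regulator, a plaintiff’s firm, or your own team can triage notices at scale.

```python
# Hypothetical checklist: these phrases are illustrative, not a legal standard.
REQUIRED_DISCLOSURES = {
    "categories of data collected": ["categories of personal information"],
    "sale/sharing opt-out": ["do not sell", "do not share", "opt out"],
    "consumer rights": ["right to delete", "right to correct", "right to know"],
    "contact method": ["contact us", "privacy@"],
}

def scan_notice(notice_text: str) -> dict[str, bool]:
    """Flag which checklist items appear (by simple keyword match) in a notice."""
    text = notice_text.lower()
    return {
        item: any(phrase in text for phrase in phrases)
        for item, phrases in REQUIRED_DISCLOSURES.items()
    }

notice = """We collect the categories of personal information described below.
To opt out of sale, click 'Do Not Sell or Share My Personal Information.'
Questions? Contact us at privacy@example.com."""

for item, present in scan_notice(notice).items():
    print(("OK     " if present else "MISSING"), item)
# OK      categories of data collected
# OK      sale/sharing opt-out
# MISSING consumer rights
# OK      contact method
```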
While maturity is measured by the comprehensiveness and robustness of the overall privacy program, and attention must be paid to each of its various aspects, starting with a focus on these key elements will position your program and organization to move forward effectively in 2024.
Judy Titera is the former Chief Privacy Officer at USAA. Currently, she is an Independent Board Director at MS Transverse and a Strategic Advisor at RadarFirst.
How do privacy incidents at companies come to the attention of regulators?
by James Haldin
Regulators are learning about privacy incidents with greater frequency and through a variety of channels. Mandatory disclosure obligations are a common feature of numerous data protection laws and regulations, as well as the cybersecurity rules adopted by the SEC earlier this year. Mandatory disclosure obligations may also be imposed upon companies via consent decrees or other enforcement-action resolutions. Additionally, companies sometimes disclose privacy incidents to regulators on a voluntary basis for various prudential reasons, including the likelihood that regulators would otherwise learn about those incidents through other channels. In voluntary disclosure scenarios, the company may have increased flexibility with respect to the timing of the disclosure and the framing of the key facts, remedial efforts, and potential legal implications of the incidents. The risks associated with voluntary disclosures, however, can be significant and must be carefully weighed against the potential benefits based on the individual facts and circumstances. Beyond direct reporting from companies, regulators may learn about privacy incidents from consumers, whistleblowers, and media reporting, particularly investigative journalists who focus on data and technology issues. Auditors and independent assessors (monitors) imposed via consent decrees may also become disclosure sources.
Regulators at both the federal and state level are also increasingly proactive in investigating potential privacy incidents or other forms of control weaknesses via standard means, including subpoenas, civil investigative demands, or informal requests for documents and information. The Federal Trade Commission and state attorneys general often rely upon laws governing unfair and deceptive trade practices to investigate privacy incidents, particularly in states that have not enacted comprehensive privacy legislation. In California and other states with privacy-focused statutory regimes, regulators and enforcement authorities are frequently eager to utilize their new authority not only to investigate potential wrongdoing but also to understand how, if at all, companies have adjusted their data practices in response to new state laws. For example, the California Attorney General announced an enforcement “sweep” in July 2023 aimed at ensuring compliance with the California Consumer Privacy Act (CCPA) with respect to the personal information of employees and job applicants. Earlier in the year, the California Attorney General announced a “sweep” aimed at businesses with mobile apps in the retail, travel, and food service industries that allegedly failed to comply with consumer opt-out requests to prevent the sale of their data. Given the continued proliferation of privacy-related laws and regulations at the state level and an increased focus on enforcement, we expect the trend towards broad-based “sweeps” that focus on particular industries or practices to continue.
James Haldin is a Partner in the Cybersecurity & Privacy practice group at Davis Polk & Wardwell LLP.
The views, opinions, and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness, or validity of any statements made on this site and will not be liable for any errors, omissions, or representations. The copyright of this content belongs to the author(s), and any liability with regard to infringement of intellectual property rights remains with the author(s).