Defense Department Unveils Final Rule for CMMC 2.0 Program: The Time Is Now for Defense Contractors To Get Compliant

by Beth Burgin Waller, Anthony Mazzeo, and Patrick Austin


Left to right: Beth Burgin Waller, Anthony Mazzeo, and Patrick Austin. (Photos courtesy of the authors)

If you work for a defense contractor or subcontractor responsible for handling controlled unclassified information (CUI) and/or federal contract information (FCI), take note: the U.S. Department of Defense has posted the final rule for the highly anticipated Cybersecurity Maturity Model Certification 2.0 program (CMMC 2.0 or the Final Rule). Issuance of the Final Rule (full text available here in PDF format) likely means DoD will begin implementing new, stringent cybersecurity standards for defense contractors in early-to-mid 2025.

Continue reading

Trust, But Verify…Therein Lies the Rub: A Fresh Look at Audits of Export Controls Compliance Programs

by Brent Carlson and Michael Huneke


Left to right: Brent Carlson and Michael Huneke (Photos courtesy of the authors)

Export controls have risen to a top corporate compliance priority in recent years, and now even pose enterprise risk for many companies.[1] The combination of new rules and enforcement signals from the U.S. Department of Commerce’s Bureau of Industry and Security (“BIS”) and increasing bipartisan congressional scrutiny means that in-house legal and compliance teams face enormous challenges. New, innovative tools and techniques are necessary to stay ahead of the game, including upgrades to keep a company’s audits effective.

Continue reading

Long-Awaited U.S. Outbound Investment Regime Published, Will Become Effective January 2, 2025

by Chase Kaniecki, Samuel H. Chang, B.J. Altvater, and Ryan Brown


Left to right: Chase Kaniecki, Samuel H. Chang, B.J. Altvater, and Ryan Brown (Photos courtesy of Cleary Gottlieb Steen & Hamilton LLP)

On October 28, 2024, the U.S. Department of the Treasury (“Treasury”) issued a long-awaited Final Rule (the “Final Rule”) implementing the U.S. Outbound Investment Security Program (the “Program”).[1]  Under the Program, effective January 2, 2025, U.S. persons will be prohibited from engaging in, or required to notify Treasury regarding, a broad range of transactions involving entities engaged in certain activities relating to semiconductors and microelectronics, quantum information technologies, and artificial intelligence (“AI”) systems in “countries of concern” (presently limited to China, Hong Kong, and Macau). 

Continue reading

U.S. Attorney Office “Whistleblower” Programs Sow Confusion and Pose Risks to Corporate Whistleblowers

by David Colapinto and Geoff Schweller


Left to right: David Colapinto and Geoff Schweller. (Photos courtesy of Kohn, Kohn & Colapinto LLP)

In recent weeks, a number of U.S. Attorneys’ Offices (USAOs) across the country have rolled out “Whistleblower Pilot Programs” offering the potential of non-prosecution agreements in exchange for voluntary self-disclosure of criminal conduct by participants in non-violent offenses. These “whistleblower” programs, announced within the same timeframe as the Department of Justice’s new Corporate Whistleblower Awards Pilot Program, can sow confusion among would-be whistleblowers and their attorneys. They also pose significant risks to corporate informants, as these Pilot Programs differ greatly from other well-known corporate whistleblower programs, such as the Securities and Exchange Commission (SEC) Whistleblower Program.

Continue reading

Irish Regulator Fines LinkedIn 310 Million Euros for GDPR Violations

by David Dumont and Tiago Sérgio Cabral


Left to right: David Dumont and Tiago Sérgio Cabral (Photos courtesy of the authors)

On October 24, 2024, the Irish Data Protection Commission (the “DPC”) announced that it had issued a fine of €310 million (approx. $335 million) against LinkedIn Ireland Unlimited Company (“LinkedIn”) for breaches of the EU General Data Protection Regulation (“GDPR”) related to transparency, fairness, and lawfulness in the context of the company’s processing of its users’ personal data for behavioral analysis and targeted advertising. In addition to the fine, the DPC also issued a reprimand and an order to bring processing into compliance.  

Continue reading

Managing Cybersecurity Risks Arising from AI — New Guidance from the NYDFS

by Charu A. Chandrasekhar, Luke Dembosky, Avi Gesser, Erez Liebermann, Marshal Bozzo, Johanna Skrzypczyk, Ned Terrace, and Mengyi Xu


Top left to right: Charu A. Chandrasekhar, Luke Dembosky, Avi Gesser, and Erez Liebermann. 
Bottom left to right: Marshal Bozzo, Johanna Skrzypczyk, Ned Terrace, and Mengyi Xu. (Photos courtesy of Debevoise & Plimpton LLP)

On October 16, 2024, the New York Department of Financial Services (the “NYDFS”) issued an Industry Letter providing guidance on assessing cybersecurity risks associated with the use of AI (the “Guidance”) under the existing 23 NYCRR Part 500 (“Part 500” or “Cybersecurity Regulation”) framework. The Guidance applies to entities that are covered by Part 500 (i.e., entities with a license under the New York Banking Law, Insurance Law or Financial Services Law), but it provides valuable direction to all companies for managing the new cybersecurity risks associated with AI.

The NYDFS makes clear that the Guidance does not impose any new requirements beyond those already contained in the Cybersecurity Regulation. Instead, the Guidance is meant to explain how covered entities should use the Part 500 framework to address cybersecurity risks associated with AI and build controls to mitigate such risks. It also encourages companies to explore the potential cybersecurity benefits from integrating AI into cybersecurity tools (e.g., reviewing security logs and alerts, analyzing behavior, detecting anomalies, and predicting potential security threats). Entities that are covered by Part 500, especially those that have deployed AI in significant ways, should review the Guidance carefully, along with their current cybersecurity policies and controls, to see if any enhancements are appropriate.

Continue reading

From Art to Science: Unveiling the Transformative Power of AI in Surveys and Interviews

by Madison Leonard, Michael Costa, and Jonny Frank

Left to right: Madison Leonard, Michael Costa and Jonny Frank. (Photos courtesy of StoneTurn)

Far from mere tools, employee surveys and interviews serve as key indicators of an organization’s overall health and success. They play a pivotal role in assessing corporate culture, gauging satisfaction, gathering feedback, improving retention, and measuring diversity. Surveys can quantify sentiment—often over multiple periods and population segments. Interviews supplement surveys by offering nuance, context, the opportunity to follow up and even multiple perspectives on a single topic. These qualitative responses are essential for identifying subtle concerns, uncovering issues not explicitly covered by standard questions, and understanding employee sentiment. Without these insights, assessments risk missing critical information about how employees feel about governance, compliance and other key topics.

Despite their importance, surveys and interviews come with their own set of challenges. They are resource-intensive and susceptible to human error. The process of choosing survey topics and questions is a delicate one. While quantitative survey results are clear-cut, the qualitative feedback from interviews and focus groups, often in the form of lengthy, nuanced responses, demands significant mental effort to categorize and interpret. The larger the dataset, the higher the risk of confirmation bias, where reviewers may focus on responses that align with their expectations or miss critical patterns in the data. This bias, coupled with the sheer volume of information to be processed, makes it difficult to conduct thorough and objective assessments, especially under time and budget constraints.

Artificial Intelligence (AI), specifically Large Language Models (LLMs), presents a compelling solution to these challenges. By automating both quantitative and qualitative data analysis, AI models can swiftly sift through large datasets, identifying patterns, contradictions, and sentiment across thousands of responses in a fraction of the time it would take a human reviewer. This time-saving aspect of AI is particularly beneficial in today’s fast-paced business environment, where efficiency and productivity are paramount.

In addition to speed, AI helps mitigate the risk of human error and bias. It allows professionals to review qualitative data thoroughly and impartially. Unlike the human eye, a well-trained AI model can efficiently process voluminous and complex text, uncovering insights without overlooking critical information.

But, as powerful as AI may be, human judgment and experience remain critical. Human input is necessary to refine, train and ultimately verify the accuracy of an AI model.

Continue reading

The Changing Approach in Compliance in the Tech Sector

by Florencia Marotta-Wurgler


Photo courtesy of author

Technological innovations such as generative artificial intelligence (AI) have come under increasing scrutiny from regulators in the U.S., the European Union, and beyond. This heightened oversight aims to ensure that companies implement strong privacy, safety, and design safeguards to protect users and secure the data used in training advanced AI models. Some of these regulations have already come into effect or soon will. The European Union’s AI Act is expected to take effect in the second half of 2024, requiring firms to comply with regulations based on the risk level of their AI systems, including obligations for transparency, data governance, human oversight, and risk management for high-risk AI applications. Within the U.S., several states have enacted laws requiring app providers to verify users’ ages and regulating AI to protect users, especially children. At the federal level, proposed legislation like the Kids Online Safety Act (KOSA) and the American Data Privacy Protection Act (ADPPA) seeks to establish national standards for youth safety, data privacy, age verification, and AI transparency on digital platforms.

For many firms, these regulatory shifts have necessitated a complete reevaluation of their compliance strategies. Meta is a fresh example of how businesses may be navigating this evolving landscape. At its “Global Innovation and Policy” event on October 16 and 17, which gathered academics, technology leaders, and policy experts, Meta executives outlined the company’s expanded compliance strategy. This strategy now extends beyond privacy concerns to tackle broader regulatory challenges, such as AI governance, youth protection, and content moderation.

Continue reading

Click to Cancel: The FTC’s Amended Negative Option Rule and What it Means for Your Business

by Julia Solomon Ensor 

Federal Trade Commission

The FTC has long regulated negative options through the Negative Option Rule and strategic enforcement actions. Recently, the FTC built on that work by announcing a set of common-sense revisions to the Negative Option Rule, now known as the Rule Concerning Recurring Subscriptions and Other Negative Option Programs. The revisions are designed to protect people from misleading enrollment tactics, billing practices, and cancellation policies, and provide businesses with clear rules of the road, all consolidated in one place, to help them build customer trust and avoid enforcement action.

Continue reading

The FTC Finalizes Sweeping Changes to HSR Reporting Obligations

by Ilene Knable Gotts, Christina C. Ma, Monica L. Smith and Gray W. Decker

From left to right: Ilene Knable Gotts, Christina C. Ma, Monica L. Smith and Gray W. Decker. (Photos courtesy of Wachtell, Lipton, Rosen & Katz)

On October 10, 2024, the Federal Trade Commission (“FTC”), with the concurrence of the Antitrust Division of the Department of Justice (“DOJ”), announced the FTC’s unanimous vote to adopt a final rule implementing significant changes to the reporting obligations under the Hart-Scott-Rodino Antitrust Improvements Act (“HSR Act”). Though not as extensive and burdensome as the originally proposed changes (see our prior memo analyzing the proposed changes), these changes will increase parties’ filing burden and limit their ability to file quickly, even in non-problematic transactions. Absent judicial intervention, the final rule will become effective 90 days after it is published in the Federal Register (i.e., approximately mid-January 2025). The FTC also announced that, once the final rule goes into effect, it will lift the three-and-a-half-year “temporary suspension” of granting early termination of the HSR waiting period in transactions not needing further agency investigation.

Continue reading