The CFIUS Colossus: CFIUS’s Expanding Authority Changes the Risk Calculus for M&A Transactions

by Stephenie Gosnell Handler, Michelle Weinbaum, Mason Gauch, and Chris Mullen

Left to right: Stephenie Gosnell Handler, Mason Gauch, and Chris Mullen. (Photos courtesy of Gibson Dunn & Crutcher LLP)

A new final rule from the U.S. Department of the Treasury expands CFIUS’s authority to request information from parties related to a transaction, increases potential penalty amounts, and expedites mitigation agreement negotiations in certain situations. With the exception of modifying the time frame within which parties are required to respond to mitigation agreement proposals, CFIUS largely adopted the language of its April 2024 proposed rule.

On November 18, 2024, the U.S. Department of the Treasury (“Treasury”), as Chair of the Committee on Foreign Investment in the United States (“CFIUS” or “the Committee”), issued a final rule largely codifying a rule proposed in April 2024, with only a handful of small, yet meaningful, changes. As noted in the accompanying press release, the final rule:

Continue reading

The Changing Approach to Compliance in the Tech Sector

by Florencia Marotta-Wurgler

Photo courtesy of author

Technological innovations such as generative artificial intelligence (AI) have come under increasing scrutiny from regulators in the U.S., the European Union, and beyond. This heightened oversight aims to ensure that companies implement strong privacy, safety, and design safeguards to protect users and secure the data used to train advanced AI models. Some of these regulations have already taken effect, and others soon will. The European Union’s AI Act is expected to take effect in the second half of 2024, requiring firms to comply with obligations based on the risk level of their AI systems, including transparency, data governance, human oversight, and risk management requirements for high-risk AI applications. Within the U.S., several states have enacted laws that require app providers to verify users’ ages and that regulate AI to protect users, especially children. At the federal level, proposed legislation such as the Kids Online Safety Act (KOSA) and the American Data Privacy and Protection Act (ADPPA) seeks to establish national standards for youth safety, data privacy, age verification, and AI transparency on digital platforms.

For many firms, these regulatory shifts have necessitated a complete reevaluation of their compliance strategies. Meta offers a recent example of how businesses may be navigating this evolving landscape. At its “Global Innovation and Policy” event on October 16 and 17, which gathered academics, technology leaders, and policy experts, Meta executives outlined the company’s expanded compliance strategy. That strategy now extends beyond privacy concerns to tackle broader regulatory challenges, such as AI governance, youth protection, and content moderation.

Continue reading