Tag Archives: William Savitt

AI in the 2024 Proxy Season: Managing Investor and Regulatory Scrutiny

by William Savitt, Mark F. Veblen, Kevin S. Schwartz, Noah B. Yavitz, Carmen X. W. Lu, and Courtney D. Hauck

Top from left to right: William Savitt, Mark F. Veblen, and Kevin S. Schwartz.
Bottom left to right: Noah B. Yavitz, Carmen X. W. Lu, and Courtney D. Hauck. (Photos courtesy of Wachtell, Lipton, Rosen & Katz)

Corporate disclosures concerning artificial intelligence have increased dramatically in the past year, with Bloomberg reporting that nearly half of S&P 500 companies referenced AI in their most recent annual reports. And some investors are clamoring for even more, using shareholder proposals to press public companies for detailed disclosures concerning AI initiatives, policies, and practices — including, most recently, an Apple shareholder proposal that attracted significant support at a meeting last week. Regulators, meanwhile, have signaled increasing scrutiny of AI-related corporate disclosures, including in a February speech by SEC Chair Gensler cautioning against “AI washing” — the practice of overstating or misstating corporate AI activity. For the 2024 proxy season and beyond, public companies will need to balance the competing demands of regulators and investors in order to craft effective, responsive strategies for engaging with their stockholders on AI topics.

FCC Ruling on AI-Facilitated Fraud Illustrates the Need for Forward-Looking Enterprise Risk Management

by William Savitt, Mark F. Veblen, Noah B. Yavitz, and Courtney D. Hauck

From left to right: William Savitt, Mark F. Veblen, Noah B. Yavitz, and Courtney D. Hauck (Photos courtesy of Wachtell, Lipton, Rosen & Katz)

In response to a recent boom in AI-powered robocall scams, the U.S. Federal Communications Commission yesterday announced a Declaratory Ruling confirming that the Telephone Consumer Protection Act, which regulates telemarketing and robocalls, also applies to calls using AI-generated voices. Other federal agencies and state legislatures have similarly moved to police the use and abuse of audio “deepfakes” — realistic voice simulations that widely available tools can generate from brief recordings. As technology continues to outpace regulation, boards must embrace a proactive approach to risk management, accounting for AI’s capacity to compromise long-standing practices in cybersecurity and internal controls.

Biden Administration Issues Sweeping Executive Order Directing Federal Agencies to Examine and Address Risks of Artificial Intelligence

by William Savitt, Mark F. Veblen, Kevin S. Schwartz, Noah B. Yavitz, and Courtney D. Hauck

From left to right: William Savitt, Mark F. Veblen, Kevin S. Schwartz, Noah B. Yavitz, and Courtney D. Hauck (Photos courtesy of Wachtell, Lipton, Rosen & Katz)

On Monday, the Biden Administration issued a long-awaited executive order on artificial intelligence, directing agencies across the federal government to take steps to respond to the rapid expansion in AI technology. The order attempts to fill a gap in national leadership on AI issues, with Congress showing little progress on any comprehensive legislation. The order mandates regulatory action that could affect companies throughout the domestic economy.

Artificial Intelligence: The New Boardroom Challenge

by William Savitt, Mark F. Veblen, Noah B. Yavitz and Courtney D. Hauck

Left to right: William Savitt, Mark F. Veblen, Noah B. Yavitz and Courtney D. Hauck (Photos courtesy of Wachtell, Lipton, Rosen & Katz)

Executives of major U.S. technology companies and labor leaders gathered at the Capitol recently to discuss the regulation of artificial intelligence with ranking members of Congress. On the agenda? Nearly everything — the impact of AI on the future of industrial organization; on the future of work and labor relations; on the future of capitalism and the U.S. economy; and, according to some, on the future of human civilization itself. The gathering was notable, but no longer unusual, as every week brings news of significant developments in AI capabilities and the legal rules that will govern them.

Caremark Exposure – And What to Do About It

by William Savitt


2022 set another record for lawsuits faulting boards of directors for failing to adequately oversee corporate operations, a third consecutive year of acceleration. Mounting evidence suggests the trend is here to stay. But here’s some good news: there is much boards and managers can do to anticipate and thereby de-risk this exposure.

Corporate litigation when things go wrong is of course nothing new. When manufactured products prove to be harmful, or services prove defective, or customers are injured, the class action bar has always responded, demanding payment for alleged tort victims. And so after a 2015 listeria outbreak traced to Blue Bell Creameries’ ice cream was linked to three deaths and infections in four states, substantial tort litigation ensued, successfully seeking compensation for the victims from Blue Bell.

Director Liability—“Caremark Protection”

by Martin Lipton and William Savitt

Since the 1996 Caremark decision, authored by the revered late Chancellor William Allen of the Delaware Court of Chancery, we have called the case to the attention of boards of directors to ease concern about personal liability resulting from derivative litigation claiming that a board was negligent in failing to prevent a defective product, or in otherwise causing or failing to prevent the corporation from becoming liable for damages to a third party.

Caremark held that a board would be protected by the business judgment rule so long as it had implemented and monitored a system designed to identify risks and then deal with them. Emphasizing that it was a doctrine that would only rarely be invoked, Caremark, and cases following it, held that directors could face exposure only if their company “utterly failed” to implement a system for risk identification or if they intentionally “ignored a red flag” — that is, declined to deal with an identified risk.