Author Archives: Jason Kelly

The Future of AI Regulation: The FTC’s New Guidance on Using AI Truthfully, Fairly, and Equitably

by Avi Gesser, Anna R. Gressel, and Parker C. Eudy

This post is Part IV of a five-part series by the authors on The Future of AI Regulation. For Part I, discussing U.S. banking regulators’ recent request for information regarding the use of AI by financial institutions, click here. For Part II, outlining key features of the EU’s draft AI legislation, click here. For Part III, discussing new obligations for companies under the EU’s draft AI legislation, click here.

In this installment, we discuss the Federal Trade Commission’s (“FTC”) recent blog post entitled “Aiming for truth, fairness, and equity in your company’s use of AI,” which was released on April 19, 2021.


The Future of AI Regulation: Draft Legislation from the European Commission Shows the Coming AI Legal Landscape

by Avi Gesser, Anna R. Gressel, and Steven Tegrar

This post is Part III of a five-part series by the authors on The Future of AI Regulation. For Part I, discussing U.S. banking regulators’ recent request for information regarding the use of AI by financial institutions, click here. For Part II, outlining key features of the EU’s draft AI legislation discussed further in this Part, click here.

The Future of AI Regulation: Draft Legislation from the European Commission Shows the Coming AI Legal Landscape

by Avi Gesser, Anna R. Gressel, and Steven Tegrar

This post is Part II of a five-part series by the authors on The Future of AI Regulation. For Part I, discussing U.S. banking regulators’ recent request for information regarding the use of AI by financial institutions, click here.

On April 21, 2021, the European Commission published its highly anticipated draft legislation governing the use of AI, which is being referred to as the “GDPR of AI” because, if enacted, it would place potentially onerous compliance obligations on a wide spectrum of companies using AI systems. The Commission proposes to regulate AI based on the potential risk posed by its intended use: AI systems that pose an “unacceptable risk” would be banned outright; AI classified as “high risk” would be subject to stringent regulatory and disclosure requirements; and certain interactive, deepfake, and emotion recognition systems would be subject to heightened transparency obligations.


The Future of AI Regulation: The RFI on AI from U.S. Banking Regulators

by Avi Gesser, Anna R. Gressel, and Amy Aixi Zhang

This post is Part I of a five-part series by the authors on The Future of AI Regulation.

Several recent developments provide new insight into the future of artificial intelligence (“AI”) regulation. First, on March 29, 2021, five U.S. federal regulators published a request for information (“RFI”) seeking comments on the use of AI by financial institutions. Second, on April 19, the FTC issued a document entitled “Aiming for truth, fairness, and equity in your company’s use of AI,” which provides seven lessons on what the FTC views as responsible AI use. Third, on April 21, the European Commission released its much-anticipated draft regulation on AI, which is widely viewed as the first step in establishing a GDPR-like comprehensive EU law on automated decision making. In this series on the future of AI regulation, we will examine each of these developments, what they mean for the future of AI regulation, and what companies can do now to prepare for the coming AI regulatory landscape.


Effective Access Controls, Timely Breach Notification, and Other Takeaways from the Latest NYDFS Cyber Resolution

by Luke Dembosky, Jeremy Feigelson, Avi Gesser, Jim Pastore, Johanna Skrzypczyk, Christopher S. Ford, Parker Eudy, and Mengyi Xu

On April 14, 2021, the New York State Department of Financial Services (the “DFS”) announced that its cyber-enforcement action against National Securities Corporation (“National Securities”) had been resolved by a Consent Order imposing a $3 million penalty. This is the latest step in the DFS’s very active cyber-enforcement agenda. The charges against First American Title Insurance Company are pending, with an August 16 hearing date, and last month the DFS reached its first full cybersecurity resolution with Residential Mortgage Services.


ASIC Releases New Immunity Policy for Market Misconduct Offences

by Olivia Dixon and Jennifer G. Hill

In late February 2021, the Australian Securities and Investments Commission (“ASIC”) released a new policy[1] regarding immunity for a range of offences under Australian corporate law (the “ASIC policy”). The ASIC policy covers offences predominantly falling under the “market misconduct” provisions of Part 7.10 of the Australian Corporations Act 2001 (the “Act”) and includes serious offences, such as market manipulation, insider trading, and dishonest conduct in the course of operating a financial services business. The ASIC policy also contemplates criminal immunity being provided for “other Commonwealth offences connected with the Pt 7.10 offence.” Such offences may include ancillary liability offences such as aiding and abetting, as well as breach of directors’ duties, false accounting, and money laundering.

The ASIC policy is not entirely novel under Australian law. Its provisions closely resemble the immunity and cooperation policy for cartel conduct maintained by another regulator, the Australian Competition & Consumer Commission, and most recently updated in 2019 (the “ACCC policy”).[2] The ACCC policy offers two forms of leniency for cartel participants who are willing to assist the ACCC in its investigation: (i) immunity: the first cartel participant to approach the ACCC may be granted conditional immunity from civil enforcement actions, and potentially from criminal actions if it meets the necessary criteria; or (ii) cooperation: if a cartel participant fails to meet the criteria for conditional immunity, it may still receive leniency from the ACCC or the court if it cooperates in the ACCC’s investigation.


Business Texts on Personal Phones: The Growing Compliance and Enforcement Risk and What to Do About It (Part II of II)

by Margaret W. Meyers, Rachel S. Mechanic, Daniel C. Zinman, David B. Massey, and Shari A. Brandt

This is Part II of a two-part post. For Part I, discussing recent enforcement actions related to employees’ use of personal devices, and the challenges employees’ use of personal devices pose for compliance with books and records and communication supervision rules, click here.


Should Companies Use Machine Learning for Their Anti-Corruption Programs?: The New Coalition for Integrity Guidance

by Shruti Shah and Jonathan J. Rusch

As they work to maintain the effectiveness of their anti-corruption risk and compliance programs, companies must be increasingly attentive to how well they make use of the data they acquire that are relevant to those programs.  The most recent edition of the U.S. Department of Justice’s “Evaluation of Corporate Compliance Programs” document states that prosecutors should inquire into whether compliance and control personnel “have sufficient direct or indirect access to relevant sources of data to allow for timely and effective monitoring and/or testing of policies, controls, and transactions,” and whether “any impediments exist that limit access to relevant sources of data.”[1]

Companies, however, are increasingly awash in such data from a multiplicity of sources: accounts payable, spend data, and third-party supplier data, to name just a few.  Many companies make use of rule-based programming, in which human programmers write rules that enable the company to search for and find data indicative of corruption risk.  But some companies are now asking whether they should use a particular field of artificial intelligence: machine learning, in which computer systems “learn” on their own from data and do not depend on human-written rules.
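To make that contrast concrete, here is a minimal, hypothetical sketch in Python. The toy accounts-payable fields (payment amount, third-party risk score, approval time) and the thresholds are illustrative assumptions, not drawn from any actual compliance program; the machine-learning side uses scikit-learn’s IsolationForest as just one example of an unsupervised model that learns a baseline of “normal” payments from the data itself.

```python
# Hypothetical sketch: rule-based vs. machine-learning screening of
# accounts-payable records. All field names and thresholds are
# illustrative assumptions, not taken from any real compliance program.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy payment records: [amount (USD), third-party risk score (0-1),
#   days from invoice to approval]
payments = np.array([
    [1200.0, 0.10, 14],
    [950.0, 0.15, 12],
    [1100.0, 0.12, 15],
    [49500.0, 0.85, 1],   # large payment, high-risk vendor, rushed approval
    [1050.0, 0.11, 13],
])

# Rule-based approach: a human analyst writes the rule in advance,
# flagging payments that cross fixed thresholds.
def rule_based_flags(rows, amount_limit=10000.0, risk_limit=0.7):
    return [i for i, (amt, risk, _) in enumerate(rows)
            if amt > amount_limit and risk > risk_limit]

# Machine-learning approach: an isolation forest learns what "normal"
# payments look like from the data and scores departures from that
# pattern, with no hand-written rule.
model = IsolationForest(contamination=0.2, random_state=0)
ml_flags = np.where(model.fit_predict(payments) == -1)[0]

print("Rule-based flags:", rule_based_flags(payments))  # e.g. [3]
print("ML anomaly flags:", list(ml_flags))              # e.g. [3]
```

The maintenance trade-off follows directly from the sketch: the rule-based flags catch only patterns an analyst anticipated, while the learned model can surface unusual payments no one thought to write a rule for, at some cost in explainability.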


Business Texts on Personal Phones: The Growing Compliance and Enforcement Risk and What to Do About It (Part I of II)

by Margaret W. Meyers, Rachel S. Mechanic, Daniel C. Zinman, David B. Massey, and Shari A. Brandt

With increasing frequency, securities and commodities regulators are focusing on employees’ use of personal mobile devices for business-related communications via applications that are not approved by employers or captured by employers’ archival systems.  For good reason, regulators believe that many employees are less guarded when texting outside of their surveilled work platforms, particularly among workplace friends and colleagues at other firms, and that some employees may even be doing so to further questionable conduct and evade detection.  Regulators and prosecutors brought waves of cases against financial firms based on messages gathered from persistent multiparty Bloomberg chat rooms, so much so that some big banks shut them down in late 2013.  Text messages on unapproved mobile platforms may well serve as the next goldmine for enforcement staff and prosecutors.    


U.S., EU, U.K., and Other Antitrust Enforcers Enter Collaboration on Antitrust Analysis of Pharma Deals

by D. Jarret Arp, Arthur J. Burke, Ronan P. Harty, Howard Shelanski, and Jesse Solomon

On March 16, 2021, a coalition of international and U.S. antitrust authorities announced the formation of a joint working group to reevaluate their approach to reviewing mergers in the pharmaceutical industry (which today relies largely on an indication-by-indication review of the competitive overlaps between the merging parties).  The issues the working group plans to address are broad and cover theories of harm, analytical methodologies, and remedies.  The formation of this group highlights that pharmaceutical deals will remain a key priority for antitrust agencies—and indicates the potential emergence of more aggressive enforcement, with implications for deal timing, the scope of agency engagement, and increased multilateral collaboration among reviewing agencies.
