by Helen V. Cantwell, Andrew J. Ceresney, Avi Gesser, Andrew M. Levine, David A. O’Neil, Winston M. Paes, Jane Shvets, Bruce E. Yannett, and Douglas S. Zolkind
On February 14, 2024, Deputy Attorney General Lisa O. Monaco announced an initiative within the U.S. Department of Justice to ramp up the detection and prosecution of crimes perpetrated through artificial intelligence (AI) technology, including seeking harsher sentences for certain AI-assisted crimes. Monaco also announced a new effort to evaluate how the Department can best use AI internally to advance its mission while guarding against AI risks.
Fighting AI-Assisted Crime
Last October, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, which Monaco said “charges the Justice Department to anticipate the impact of AI on our criminal justice system, on competition, and on our national security.” Monaco expressed that, while AI holds great promise, it “is also accelerating risks to our collective security,” and she highlighted two areas where DOJ will focus its AI enforcement efforts:
- Election Security. Monaco noted that, with more than four billion people around the world able to vote in elections this year, AI gives foreign adversaries a multitude of ways to harm voters. Bad actors can seek to use AI to "radicalize users on social media with incendiary content created with generative AI" and to "misinform voters by impersonating trusted sources and spreading deepfakes," and they can use "chatbots, fake images and even cloned voices" to spread falsehoods about elections in an effort to deny people their right to vote.
- National Security. In February 2023, DOJ and the Commerce Department announced the “Disruptive Technology Strike Force”—an effort to enforce export control laws “to strike back against adversaries trying to siphon off America’s most advanced technology and use it against us.” Monaco said that, going forward, this Strike Force “will place AI at the very top of its enforcement priority list.” She declared AI “the ultimate disruptive technology” and stressed that DOJ will work to “neutralize [America’s] adversaries” so that they cannot use AI to threaten U.S. national security.
To investigate and prosecute AI-assisted crime—whether in the areas of election security, national security, or otherwise—Monaco said that DOJ will rely on “existing and enduring legal tools to their fullest extent” and will seek “to build on them where new ones may be needed.” As examples, she noted that “discrimination using AI is still discrimination,” “price fixing using AI is still price fixing,” and “identity theft using AI is still identity theft.”
Critically, Monaco noted that because AI—like a firearm—can “enhance the danger of a crime,” DOJ will now seek harsher sentences for criminal offenses “made significantly more dangerous by the misuse of AI.”
We will be monitoring closely to see whether the Department issues further guidance on the circumstances in which prosecutors will seek such an AI-based sentencing increase. It also remains to be seen whether DOJ will incorporate similar enhancements into its Corporate Enforcement Policy, such as by seeking harsher penalties for companies that engage in criminal wrongdoing through the misuse of AI technology.
Justice AI
In the same speech, Monaco announced "Justice AI"—an effort to study how best to use AI within the Department and to deploy AI technology to advance DOJ's mission, while guarding against risks. DOJ recently appointed its first "Chief AI Officer," who will help bring together DOJ's law enforcement and civil rights teams and will work with a newly formed "Emerging Technology Board" to advise the Attorney General on the responsible and ethical uses of AI by DOJ.
Monaco said that the Justice AI initiative will also convene individuals from across civil society, academia, science, and industry so that the Department can “draw on varied perspectives” in evaluating how to use AI.
Monaco noted that DOJ, like other federal agencies, is working to create guidance to govern its use of AI. Thus, for example, before DOJ uses a new AI system to “assist in identifying a criminal suspect” or to “support a sentencing decision,” the Department “must first rigorously stress test that AI application and assess its fairness, accuracy, and safety.” Monaco also pointed out that DOJ already has deployed AI for certain functions, such as tracing the sources of opioids, triaging tips submitted to the FBI, and synthesizing huge volumes of evidence in certain cases.
Helen V. Cantwell, Andrew J. Ceresney, Avi Gesser, Andrew M. Levine, David A. O’Neil, Winston M. Paes, Jane Shvets, Bruce E. Yannett, and Douglas S. Zolkind are Partners at Debevoise & Plimpton LLP. The post was first published on the firm’s blog.
The views, opinions, and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness, and validity of any statements made on this site and will not be liable for any errors, omissions, or representations. The copyright of this content belongs to the author(s), and any liability with regard to infringement of intellectual property rights remains with the author(s).