Technology will soon force broad changes in how we conceive of corporate liability. The law’s doctrines for evaluating corporate misconduct date from a time when human beings ran corporations. Today, breakthroughs in artificial intelligence and big data allow automated systems to make many business decisions, such as which loans to approve,[1] how high to set prices,[2] and when to trade stock.[3] As corporate operations become increasingly automated, algorithms will come to replace employees as the leading cause of corporate harm. The law is not equipped for this development. Rooted in an antiquated paradigm, the law presently identifies corporate misconduct with employee misconduct. If it continues to do so, the inevitable march of technological progress will increasingly immunize corporations from most civil and criminal liability.
In a forthcoming article, The Extended Corporate Mind: When Corporations Use AI to Break the Law,[4] I spell out the challenge automation poses for corporate law. The root of the problem is that the law has nothing to say when automated systems are responsible for the “thinking” behind corporate misconduct. Most civil and criminal corporate liability requires evidence of a deficient corporate mental state, like purpose to discriminate,[5] knowledge of inside information,[6] or agreement to fix prices.[7] The primary doctrine for attributing mental states to corporations—respondeat superior—defines corporate mental states in terms of employee mental states.[8] When corporations misbehave through their employees, respondeat superior offers relatively straightforward results.[9] But when corporations use algorithms to misbehave, the liability inquiry quickly breaks down.[10] Algorithms are not employees, nor do they have mental states of their own, so respondeat superior cannot apply. This is true even if, from the outside, a corporation acting through an algorithm looks like it is behaving just as purposefully or knowingly as a corporation that uses only employees.
The present state of the law is worrisome because corporate automation will grow exponentially over the coming years.[11] This all but guarantees that corporations will escape accountability as their operations require less and less human intervention. Though algorithms promise to make corporations more efficient, they do not remove (or even always reduce) the possibility that things will go awry.[12] The worry is concrete. Some current examples of corporate algorithmic harm that merit a searching liability inquiry include:[13]
- A lender’s automated platform approves mortgages in a fashion that has a discriminatory racial impact but might also have a business justification.[14]
- A financial institution’s trading algorithm makes trades on the basis of material, non-public information.[15]
- Competing retailers’ pricing algorithms set prices at matching, supra-competitive levels.[16]
The incentive structure that current law sets out for corporations will accelerate the law’s obsolescence. Safe algorithms take years to program, train, and test. Their rollout should be piecemeal, with cautious pilots followed by patches and updates to address lessons learned. By shielding corporations from liability for many algorithmic harms, the law encourages corporations to be cavalier. Businesses keen to manage their liabilities will seek the safe haven of algorithmic misconduct rather than chance liability for misconduct by human employees. We should expect corporations to turn to algorithms prematurely, before the underlying technology has been sufficiently tested for socially responsible use.[17]
Fixing the problem of algorithmic corporate misconduct is not a simple matter of finding a nefarious corporate programmer and then applying respondeat superior to hold her employer liable. Certainly, there will be cases where an employee purposely or knowingly designs a corporate algorithm to break the law. In such scenarios, respondeat superior will suffice. In most cases, though, no such employee will exist. Sometimes, employees may have been reckless or negligent in designing harmful algorithms. While respondeat superior may help for liability schemes that only require recklessness or negligence, many of the most significant corporate liability statutes require more demanding mental states like purpose or knowledge.[18] Furthermore, algorithms will often produce harms even without employee recklessness or negligence. The most powerful algorithms literally teach themselves how to make decisions.[19] This gives them the ability to solve problems in unanticipated ways, freeing them from the constraining foresight of human intelligence.[20] One consequence of this valuable flexibility is that these algorithms can create harms even if all the humans involved are entirely innocent.[21]
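To make the point concrete, consider a minimal sketch (my own illustration, not drawn from the article or from the sources cited above) of the pricing scenario in the third bullet: two independent self-learning pricing agents in a toy duopoly. The demand function, price grid, and learning parameters below are invented for illustration. Nothing in the code instructs the agents to coordinate; each simply maximizes its own profit, yet runs of this kind can end with both agents charging well above the most competitive profitable price, with no human ever forming an intent to fix prices.

```python
# Illustrative sketch only: two independent Q-learning pricing agents in a toy
# duopoly. Neither is programmed to collude; each only maximizes its own profit.
import random

PRICES = [1, 2, 3, 4, 5]        # possible price levels (hypothetical grid)
COST = 1                        # unit cost; a price of 2 is the lowest profitable price
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def demand(my_price, rival_price):
    """Toy demand: the cheaper firm captures more of a fixed market."""
    if my_price < rival_price:
        return 10
    if my_price == rival_price:
        return 5
    return 2

def profit(my_price, rival_price):
    return (my_price - COST) * demand(my_price, rival_price)

# Each agent's state is the rival's last observed price.
q1 = {(s, a): 0.0 for s in PRICES for a in PRICES}
q2 = {(s, a): 0.0 for s in PRICES for a in PRICES}

def choose(q, state):
    # Mostly pick the action the agent currently believes is most profitable.
    if random.random() < EPSILON:
        return random.choice(PRICES)
    return max(PRICES, key=lambda a: q[(state, a)])

def update(q, state, action, reward, next_state):
    # Standard Q-learning update toward observed profit plus discounted future value.
    best_next = max(q[(next_state, a)] for a in PRICES)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])

p1, p2 = random.choice(PRICES), random.choice(PRICES)
for _ in range(50_000):
    a1, a2 = choose(q1, p2), choose(q2, p1)
    update(q1, p2, a1, profit(a1, a2), a2)
    update(q2, p1, a2, profit(a2, a1), a1)
    p1, p2 = a1, a2

print("Final prices:", p1, p2)  # frequently settles above the competitive level
```

The sketch is deliberately simplistic, but it captures the mechanism the text describes: the harmful outcome emerges from the agents’ own trial-and-error learning, not from any employee’s purpose, knowledge, or even foreseeable negligence.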
To plug the algorithmic liability loophole, the law needs a framework for extending its understanding of the corporate mind beyond the employees whose shoes algorithms are coming to fill. The ideal solution would find a way to treat corporations the same regardless of whether algorithms or employees are behind the wheel. To have a realistic prospect of persuading lawmakers, the solution should steer clear of science fictions like robot minds and algorithmic agency.[22] In The Extended Corporate Mind, I propose a detailed doctrine that I think can do the work. The basic idea is that corporations that use algorithms to fulfill employee roles should be treated as having the same mental states as corporations that engage in the same patterns of behavior using employees. Legal parity between employee and algorithmic misconduct would remove the incentives the law presently gives corporations to rush toward automation. To be clear, corporate automation is inevitable and desirable. But we should not allow it to compromise our ability to hold corporations accountable when they break the law.
Footnotes
[1] Mikella Hurley & Julius Adebayo, Credit Scoring in the Era of Big Data, 18 Yale J.L. & Tech. 148, 190-93 (2016).
[2] Emilio Calvano et al., Artificial Intelligence, Algorithmic Pricing, and Collusion, Vox (Feb. 3, 2019).
[3] Bernard Marr, The Revolutionary Way of Using Artificial Intelligence in Hedge Funds, Forbes (Feb. 15, 2019, 1:48 AM). Computer scientists proved years ago that algorithms could teach themselves to manipulate markets. See generally Enrique Martínez-Miranda et al., Learning Unfair Trading: A Market Manipulation Analysis from the Reinforcement Learning Perspective, Ass’n for Advancement of Artificial Intelligence (2015).
[4] 97 N.C. L. Rev. (forthcoming 2020).
[5] See Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671, 711-12, 726 (2016) (discussing the unavailability of disparate impact arguments to show algorithmic discrimination).
[6] 17 C.F.R. § 240.10b5-1 (2019).
[7] Sherman Antitrust Act, 15 U.S.C. § 1 (2012).
[8] Restatement (Third) of Agency § 2.04 (Am. Law Inst. 2006).
[9] These straightforward results are not particularly compelling. See Mihailis E. Diamantis, Corporate Criminal Minds, 91 Notre Dame L. Rev. 2049, 2056-58 (2016) (broadly critiquing the use of respondeat superior).
[10] See Jack M. Balkin, Knight Professor of Constitutional Law & the First Amendment, Yale Law School, 2016 Sidley Austin Distinguished Lecture on Big Data Law and Policy: The Three Laws of Robotics in the Age of Big Data (Oct. 27, 2016), in 78 Ohio St. L.J. 1217, 1234 (2017).
[11] See Sam Ransbotham et al., Reshaping Business with Artificial Intelligence, MIT Sloan Mgmt. Rev. (Sept. 6, 2017).
[12] See Mark A. Lemley & Bryan Casey, Remedies for Robots, 86 U. Chi. L. Rev. 1311, 1318-19 (2019); Cade Metz, Is Ethical A.I. Even Possible?, N.Y. Times (Mar. 1, 2019).
[13] A growing scholarly literature discusses others. See, e.g., Sonia K. Katyal, Private Accountability in the Age of Artificial Intelligence, 66 UCLA L. Rev. 54 (2019).
[14] See generally Robin Nunn, Discrimination and Algorithms in Financial Services: Unintended Consequences of AI, 23 Cyberspace Law. NL 4 (2018). For a similar example involving hiring ads, see Esha Bhandari & Rachel Goodman, ACLU Challenges Computer Crimes Law That is Thwarting Research on Discrimination Online, ACLU: Free Future (June 29, 2016, 10:00 AM); Hurley & Adebayo, supra note 1, at 193-95 (discussing the challenges of such cases).
[15] See Regulation (EU) 596/2014 of the European Parliament and of the Council of 16 April 2014 on market abuse (market abuse regulation) and repealing Directive 2003/6/EC of the European Parliament and of the Council and Commission Directives 2003/124/EC, 2003/125/EC and 2004/72/EC, 2014 O.J. (L 173) 1, 1; Directive 2014/57/EU of the European Parliament and of the Council of 16 April 2014 on criminal sanctions for market abuse (market abuse directive), 2014 O.J. (L 173) 1, 1; Martínez-Miranda et al., supra note 3, at § 6; Renato Zamagna, The Future of Trading Belong to Artificial Intelligence, Medium: Data Driven Investor (Nov. 15, 2018).
[16] Calvano et al., supra note 2; Ariel Ezrachi & Maurice E. Stucke, Two Artificial Neural Networks Meet in an Online Hub and Change the Future (of Competition, Market Dynamics and Society) 2-26 (Oxford Legal Studies Research Paper No. 24/2017); Greg Rosalsky, When Computers Collude, NPR: Planet Money (Apr. 2, 2019, 7:30 AM).
[17] Microsoft President and Chief Legal Officer Brad Smith has remarked, “We don’t want to see a commercial race to the bottom. Law is needed.” Metz, supra note 12.
[18] See Mihailis E. Diamantis, Functional Corporate Knowledge, 2019 Wm. & Mary L. Rev. (forthcoming).
[19] David Lehr & Paul Ohm, Playing with the Data: What Legal Scholars Should Learn About Machine Learning, 51 U.C. Davis L. Rev. 653, 655 (2017); Jason Brownlee, Supervised and Unsupervised Machine Learning Algorithms, Machine Learning Mastery (Mar. 16, 2016).
[20] See Lemley & Casey, supra note 12.
[21] Barocas & Selbst, supra note 5, at 729; Ryan Abbott & Alex F. Sarch, Punishing Artificial Intelligence: Legal Fiction or Science Fiction, 52 U.C. Davis L. Rev. (forthcoming 2019); Kevin Petrasic et al., Algorithms and Bias: What Lenders Need to Know, White & Case 1, 5-6 (2017).
[22] See Joanna J. Bryson, Mihailis E. Diamantis, & Thomas D. Grant, Of, For, and By the People: The Legal Lacuna of Synthetic Persons, 25 Artificial Intelligence & L. 273, 278 (2017) (“[T]alk of fictional narrative is fun, talk of numbers allows us to build airplanes, and talk of morality allows us to organize socially.”).
Mihailis E. Diamantis is an Associate Professor at the University of Iowa, College of Law.