Netherlands Welfare Case Sheds Light on Explainable AI for AML-CFT

by Winston Maxwell and Xavier Vamparys [1]

The District Court of The Hague, in the Netherlands, found that the government’s use of artificial intelligence (AI) to identify welfare fraud violated European human rights law because the system lacked sufficient transparency and explainability.[2] As we discuss below, the court applied the EU principle of proportionality to the anti-fraud system and found that it lacked adequate human rights safeguards. Anti-money laundering/countering the financing of terrorism (AML-CFT) measures must also satisfy the EU principle of proportionality. The Hague court’s reasoning in the welfare fraud case suggests that the use of opaque algorithms in AML-CFT systems could compromise their legality under human rights principles as well as under Europe’s General Data Protection Regulation (GDPR).[3]

Background on the Invalidated Netherlands Law

A Netherlands law authorized the government to collect seventeen kinds of data from government databases and create a risk report showing the likelihood that a given individual is receiving welfare benefits without entitlement. The District Court of The Hague found that welfare fraud is a serious social problem and that government recourse to an AI-based tool furthers a compelling social interest, provided the system satisfies the balancing test imposed under European human rights law, known as the “proportionality test.” One key element of the proportionality test is whether the system has sufficient safeguards to mitigate risks to human rights—in this case, the right to privacy. Transparency is one important safeguard, and the court found that the Netherlands government’s system lacked transparency in two important ways.

First, individuals were not informed of the existence of the system, nor given a general understanding of how data about them was being used. Second, neither the court nor individuals targeted by risk profiles were able to understand how the proprietary model operates and how it reaches a particular decision. This kind of understanding (often referred to as “explainability”[4]) is necessary to permit an individual to defend himself or herself against an unfavorable report, and to permit a court to verify the presence or absence of discrimination (for example, critics of the system argued that it discriminated against people who lived in poorer geographic areas). Because of these shortcomings, the system failed the proportionality test.
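
To make “explainability” concrete, consider the following simplified sketch. The model, feature names, and weights are invented for illustration; the actual Dutch model was proprietary and undisclosed, which is precisely what the court objected to. An interpretable model of this kind can decompose a risk score into per-feature contributions, the sort of information an individual would need in order to contest an unfavorable report and a court would need in order to check for discrimination.

```python
import math

# Hypothetical, simplified welfare-fraud risk model. The feature names and
# weights are invented for illustration only; they do not describe the
# actual (undisclosed) Dutch system.
WEIGHTS = {
    "benefit_duration_years": 0.4,
    "address_changes": 0.7,
    "reported_income_gap": 1.1,
}
BIAS = -3.0

def risk_score(features):
    """Logistic risk score between 0 and 1."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

def explain(features):
    """Per-feature contribution to the log-odds of the score.
    This decomposition is what makes the score contestable: it shows
    which inputs drove the result, and by how much."""
    return {name: WEIGHTS[name] * value for name, value in features.items()}

person = {"benefit_duration_years": 3, "address_changes": 2, "reported_income_gap": 1.5}
print(risk_score(person))  # ~0.78
print(explain(person))     # shows that "reported_income_gap" contributed most
```

With a transparent model like this, a person flagged by the system could see that the score was driven mainly by a particular input and challenge that input’s accuracy or relevance. With an undisclosed model, as in the Dutch case, neither the individual nor the court can perform this check.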

Applications to AML-CFT

The Hague court’s decision may signal how AI-based AML-CFT measures could be evaluated under EU law. Like the Netherlands’ welfare fraud detection system, AML-CFT transaction monitoring systems generate alerts that may ultimately result in suspicious activity reports being sent to the government for investigation and enforcement. As the Netherlands case demonstrates, algorithms that contribute to government decisions with potentially adverse effects on individuals are subject to high standards of explainability. Understanding a government algorithm is at the heart of an individual’s constitutional right to challenge administrative decisions, and courts have a duty to verify the legality of systems deployed by the government. Yet explainability requirements can forestall implementation of such systems altogether. In France, for example, government algorithms are subject to detailed explainability requirements,[5] making recourse to machine learning algorithms difficult. In the Houston Federation of Teachers case,[6] a United States federal judge imposed similar transparency requirements under constitutional due process principles.

What’s Special About AML-CFT?

AML-CFT is a unique form of law enforcement legislation because the job of investigating potential crime is split between government and private-sector entities. Unlike in other sectors, such as telecommunications,[7] AML-CFT requires private-sector entities to ferret out suspicious behavior by their customers and report their findings secretly to the authorities. There are good reasons for this approach: Banks are in a better position than the police to analyze their customers’ records. More importantly, banks and other entities targeted by AML-CFT legislation could profit from money laundering and therefore may have economic incentives to turn a blind eye to criminal behavior. AML-CFT legislation seeks to neutralize these incentives. Nevertheless, AML-CFT creates an unusual situation in which private-sector entities are required by law to conduct police-like surveillance and report any suspicious activities to the authorities.[8]

What Can Be Extrapolated from Principles Traditionally Applied to Government Algorithms?

The European human rights principles applicable to government-operated algorithms flow from a long line of cases involving government use of technology to fight crime and assure national security.[9] The most well-known case, and the most relevant for AML-CFT purposes, is Digital Rights Ireland, in which the Court of Justice of the European Union (CJEU) found an EU directive illegal because the data retention measures it imposed on telecommunications operators went beyond what is necessary in a democratic society.[10] The invalidated directive required Member States to enact laws requiring telecommunications operators to retain all traffic data for up to two years in order to assist with potential law enforcement queries. The court found that the directive’s requirements were overbroad, targeting the traffic data of all citizens regardless of their potential involvement in a crime. The directive also lacked sufficient safeguards regarding the persons who could access the data.

The EU proportionality test applied by the CJEU and the European Court of Human Rights (ECtHR) contains three branches. The first branch requires that the measure pursue a legitimate purpose, such as fighting crime. The second branch requires that the measure be provided for by law, meaning that it must be set forth in a democratically enacted law that is accessible and understandable to the public. The third branch requires that the measure be necessary in a democratic society, meaning that it must generate the least possible interference with human rights while still getting the job done, and must be surrounded by appropriate safeguards. The last branch, “necessary in a democratic society,” is the most difficult to satisfy, imposing a sliding scale depending on the seriousness of the crime and the level of intrusion into human rights. Highly intrusive measures, such as real-time monitoring of communications, are permitted only for the most serious crimes and must be surrounded by the most stringent safeguards. In its decision on welfare fraud, the Hague court seemed to accept that welfare fraud is a serious enough problem to justify recourse to AI solutions. The court emphasized, however, the need for safeguards, including algorithmic transparency, so that individuals can understand and challenge both the risk score and the underlying algorithm.

In Addition to the Proportionality Test, the “Police Directive” and GDPR Apply to AML-CFT

The Hague court limited its analysis to the proportionality principle, which flows from the European Convention on Human Rights and the European Charter of Fundamental Rights. Having found that the Netherlands law failed the proportionality test, the Hague court did not need to go further and analyze the system’s compatibility with other applicable legislation, such as the so-called “Police Directive,”[11] which applies to government data processing for law enforcement purposes, or the GDPR, which applies to public- and private-sector data processing. AML-CFT would likely be subject to all three bodies of law: The proportionality principle would apply to the laws and regulations creating the AML-CFT system, including the European AML-CFT Directive itself;[12] the Police Directive would apply to the national laws that dictate how law enforcement authorities handle personal data received from banks, including data contained in suspicious activity reports; and the GDPR would apply to the way in which banks and other entities subject to AML-CFT collect personal data from customers, create risk profiles, monitor transactions, and report suspicious activity to authorities.

In a dispute involving a bank’s AML-CFT practices, all three bodies of law would likely become intertwined. To defend against a claim that its transaction monitoring violated the GDPR, a bank would argue that AML-CFT legislation requires it to conduct invasive transaction monitoring. The dispute would likely result in a question being referred to the CJEU for a preliminary ruling on the compatibility of national AML-CFT laws and regulations with the AML-CFT Directive, the GDPR, the Police Directive, and the European Charter of Fundamental Rights. In the end, the CJEU would likely hold that AML-CFT legislation—whether national or European—must be interpreted in a manner consistent with the proportionality test, and would likely address whether the proportionality test was satisfied in the particular situation described by the national court in its reference.

The Proportionality Test Requires an Evaluation of AML-CFT Legislation’s Efficacy

It is difficult to apply the proportionality test and the GDPR to AML-CFT because law enforcement efforts are split between private-sector entities and government authorities. The proportionality test, in particular its “necessary in a democratic society” branch, requires an evaluation of the system’s efficacy and a comparison with the level of interference with human rights, the level of safeguards, and the availability of less intrusive options. This sort of comparison requires a global view of the system, starting with the bank’s transaction monitoring system and ending with government seizure of funds and prosecution of criminals. Currently, we know that less than 1% of criminal funds are seized.[13] We also know that bank transaction monitoring systems are costly to operate and often generate more than 90% false positives,[14] a rate that is partly structural, as the illustrative calculation below suggests. This suggests that current measures are not terribly effective. Yet, to our knowledge, no court or regulatory authority has attempted to evaluate the efficacy of the system as a whole against its interference with human rights, the level of safeguards, and the availability of less intrusive means of achieving the same objective.

AI can potentially change the equation by detecting more suspicious activities and thereby increasing AML-CFT efficacy. AI can also create new forms of privacy interference and require new safeguards, particularly in light of the opacity of machine learning algorithms. But it is almost impossible to evaluate how AI would fit into the proportionality analysis of AML-CFT, partly because some of the important parameters of the proportionality equation are simply unavailable, including the level of success of the current approach in fighting criminality and its overall costs to financial institutions (and thus, indirectly, to their clients and society as a whole). The European Data Protection Supervisor has warned that national AML-CFT laws and regulations must comply with the proportionality principle set forth in Digital Rights Ireland and other CJEU and ECtHR cases.[15] To our knowledge, however, no such proportionality test has ever been conducted.
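
The “more than 90% false positives” figure is worth unpacking, because it reflects arithmetic as much as poorly built systems: genuinely suspicious transactions are rare, so even a per-transaction rule that is rarely wrong produces alerts dominated by false positives. The following back-of-the-envelope calculation uses invented figures (the base rate, sensitivity, and per-transaction false-positive rate are assumptions for illustration; only the order of magnitude of the result matches the source cited above[14]):

```python
# Illustrative base-rate arithmetic. All input figures are invented.
n_transactions = 1_000_000
base_rate = 0.001            # assume 0.1% of transactions are truly suspicious
sensitivity = 0.90           # assume alerts catch 90% of suspicious transactions
false_positive_rate = 0.01   # assume alerts fire on 1% of legitimate transactions

true_positives = n_transactions * base_rate * sensitivity                  # 900
false_positives = n_transactions * (1 - base_rate) * false_positive_rate   # 9,990
precision = true_positives / (true_positives + false_positives)

print(f"alerts: {true_positives + false_positives:,.0f}")                  # 10,890
print(f"share of alerts that are false positives: {1 - precision:.0%}")    # 92%
```

Under these assumptions, a monitoring rule that is wrong on only 1% of legitimate transactions still produces alerts that are roughly 92% false positives, simply because legitimate activity vastly outnumbers criminal activity.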

The GDPR Raises Issues of “Gold Plating”

The GDPR raises similar difficulties. The GDPR permits banks to conduct data processing to the extent required by law. Because AML-CFT legislation requires banks to deploy systems to identify suspicious activities, banks generally rely on the GDPR’s “required by law” legal basis for processing. But data protection authorities warn national bank regulators and banks against “gold plating,” i.e., going beyond the strict minimum required by AML-CFT legislation.[16] AML-CFT regulations are vague and generally put the burden on financial institutions to define which transactions are suspicious and to design systems to detect them. In practice, this leads financial institutions to create customer profiling systems and risk scenarios that generate a large number of false positives. Bank regulators put pressure on financial institutions to make sure all risk profiles are covered and encourage institutions constantly to do more, particularly since money laundering patterns evolve over time, becoming more and more sophisticated. In a context where each bank is required to assess its own AML-CFT risks and decide for itself which measures and resources it must deploy to manage those risks (a “risk-based” approach), and where banking regulators push for ever-higher detection standards, it is difficult to know when a bank’s compliance efforts go beyond what is strictly necessary and become “gold plating” from a data protection standpoint.

In the telecommunications industry, by contrast, the question is simpler: Laws or government decrees specify exactly what kind of data telecom operators must retain and for how long. Going beyond these requirements would constitute “gold plating” and require a separate justification under the GDPR.

Another GDPR-related issue is a bank’s processing of data relating to criminal offenses, which is prohibited under the GDPR without a clear law authorizing the processing, combined with specific legal safeguards.[17] Suspicious activity reports generated by banks would appear to be a form of processing relating to criminal offenses, yet the GDPR’s requirements do not appear to be met.

Conclusion: Applying New AI Tools to Transaction Monitoring

AI can help speed the processing of alerts and reduce the number of false positives. But the real power of AI lies in its ability to detect new patterns of criminal behavior, including links between seemingly unrelated transactions. This use of AI could uncover criminal activities that currently escape notice, increasing the efficacy of efforts to seize criminal funds.[18] Law enforcement and intelligence agencies already use AI algorithms on telecommunications data to detect signals of potential terrorist activity;[19] tax authorities use AI to detect signs of tax fraud.[20]
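
As a rough illustration of what “links between seemingly unrelated transactions” can mean in practice, the sketch below groups transactions into clusters whenever they share an attribute such as an account or a device. The data, attributes, and clustering rule are invented for illustration; production AML systems use far richer graph analytics, but the underlying idea of connecting records through shared attributes is the same.

```python
from collections import defaultdict

# Toy transaction records. Taken individually, each looks unremarkable;
# links only appear once shared attributes are connected.
transactions = [
    {"id": "t1", "account": "A", "device": "d9"},
    {"id": "t2", "account": "B", "device": "d9"},  # shares a device with t1
    {"id": "t3", "account": "B", "device": "d2"},  # shares an account with t2
    {"id": "t4", "account": "C", "device": "d7"},  # unrelated
]

# Union-find over transaction ids.
parent = {t["id"]: t["id"] for t in transactions}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Link any two transactions that share an account or a device.
for key in ("account", "device"):
    first_seen = {}
    for t in transactions:
        if t[key] in first_seen:
            union(t["id"], first_seen[t[key]])
        else:
            first_seen[t[key]] = t["id"]

clusters = defaultdict(list)
for t in transactions:
    clusters[find(t["id"])].append(t["id"])
print(list(clusters.values()))  # [['t1', 't2', 't3'], ['t4']]
```

Here, t1 and t3 share no attribute at all, yet they end up in the same cluster through t2. A reviewer examining any single transaction in isolation would see nothing connecting them.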

While AI tools can increase detection of crime, they also create new concerns for privacy, transparency, and explainability, as the Netherlands welfare fraud case demonstrates. Before banks and other financial institutions can integrate AI into transaction monitoring, they need to know how those tools would affect the proportionality of the entire system under European human rights law and how they would affect the GDPR analysis. We know that AI-based AML-CFT processes could create friction with two core GDPR principles: purpose limitation (data must be collected for a specific, explicit, and legitimate purpose) and data minimization (only adequate, relevant, and limited data that is necessary for the purpose may be processed). Yet many of the parameters needed to conduct a proportionality or GDPR analysis of new AI tools are simply unavailable, due to a lack of visibility into the government side of AML-CFT processes. Thus, before asking whether the use of AI tools for AML-CFT purposes would be compatible with human rights and GDPR constraints, we first need to understand how the human rights and GDPR frameworks apply to current AML-CFT processes. Unfortunately, the answer to that question is far from clear.

Footnotes

[1] This blog entry is part of Telecom Paris’s interdisciplinary research program on Explainable AI for AML (XAI4AML), financed by the French National Research Agency and PWC.

[2] Rb. Den Haag 5 februari 2020 (Nederlands Juristen Comité voor de Mensenrechten/Staat Der Nederlanden) (Neth.), https://uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2020:865.

[3] Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119) 1 (EU) [hereinafter GDPR].

[4] For a general introduction to AI explainability, see Valérie Beaudouin et al., Identifying the “Right” Level of Explanation in a Given Situation (Mar. 14, 2020) (unpublished manuscript), https://hal.telecom-paristech.fr/hal-02507316/document (PDF: 243 KB).

[5] See, e.g., Code des relations entre le public et l’administration [Code of Relations between the Public and the Administration] art. 311-3-1 (Fr.).

[6] Hous. Fed’n of Teachers, Local 2415 v. Hous. Indep. Sch. Dist., 251 F. Supp. 3d 1168 (S.D. Tex. 2017). In Houston Federation of Teachers, a teachers’ union challenged the school district’s use of an algorithm to calculate annual performance scores of teachers based on a number of criteria.  The court found that without access to “value-added equations, computer source codes, decision rules, and assumptions,” teachers could not exercise their constitutionally-protected rights to due process. Id. at 1179.

[7] In telecommunications, operators are required to cooperate with law enforcement by responding to data requests and interception orders, but they do not actively search for behavior by their users that might be linked to criminal activity.

[8] Many private-sector entities actively monitor transactions to detect fraud or cyber-attacks. However, anti-fraud and cyber-security are driven by the firm’s own economic incentives to protect the private-sector entity and its customers and are not generally accompanied by an obligation to report suspicious activity to law enforcement authorities, even if voluntary reporting is encouraged. By contrast, AML-CFT legislation is designed to protect interests—e.g., fighting drug trafficking and organized crime—that do not impact the bank’s bottom line, or indeed may impact the bottom line in the wrong way. In economic terms, the interests pursued by AML-CFT are not “internalized” by the bank, which means that legislation is needed to ensure that the bank applies a socially-optimal level of care.

[9] See, e.g., Case C-362/14, Schrems v. Data Prot. Comm’r, http://curia.europa.eu/juris/document/document.jsf;jsessionid=B00BBD352381DF2FE9D14EDDA46347D2?text=&docid=169195&pageIndex=0&doclang=en&mode=lst&dir=&occ=first&part=1&cid=1220511 (Oct. 6, 2015) (dealing with the United States/EU Safe Harbor mechanism and holding that mass surveillance is incompatible with the EU Charter of Fundamental Rights); S. and Marper v. United Kingdom, 2008-V Eur. Ct. H.R. 167 (dealing with use of DNA data for fighting crime); Klass v. Germany, 2 Eur. H.R. Rep. 214 (1978) (dealing with “exploratory or general surveillance” measures).

[10] Joined Cases C-293/12 & C-594/12, Dig. Rights Ir. Ltd. v. Minister for Commc’ns, http://curia.europa.eu/juris/document/document.jsf?text=&docid=150642&pageIndex=0&doclang=en&mode=lst&dir=&occ=first&part=1&cid=206600 (Apr. 8, 2014).

[11] Directive 2016/680, of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data by Competent Authorities for the Purposes of the Prevention, Investigation, Detection or Prosecution of Criminal Offences or the Execution of Criminal Penalties, and on the Free Movement of Such Data, and Repealing Council Framework Decision 2008/977/JHA, 2016 O.J. (L 119) 89 (EU).

[12] Directive 2018/843 of the European Parliament and of the Council of 30 May 2018 Amending Directive 2015/849 on the Prevention of the Use of the Financial System for the Purposes of Money Laundering or Terrorist Financing, and Amending Directives 2009/138/EC and 2013/36/EU, 2018 O.J. (L 156) 43 (EU). This directive (the “5th AML Directive”) was to be transposed by Member States by January 10, 2020. The 6th AML Directive must be transposed by Member States by December 3, 2020. Directive 2018/1673 of the European Parliament and of the Council of 23 October 2018 on Combating Money Laundering by Criminal Law, 2018 O.J. (L 284) 22 (EU).

[13] Europol, Does Crime Still Pay?: Criminal Asset Recovery in the EU (2016), https://www.europol.europa.eu/sites/default/files/documents/criminal_asset_recovery_in_the_eu_web_version_0.pdf (PDF: 1.1 MB); United Nations Office on Drugs and Crime, Estimating Illicit Financial Flows Resulting from Drug Trafficking and Other Transnational Organized Crime (2011), https://www.unodc.org/documents/data-and-analysis/Studies/Illicit_financial_flows_2011_web.pdf (PDF: 3.2 MB).

[14] See, e.g., IBM, Fighting Financial Crime with AI 5–7 (2019), https://www.ibm.com/downloads/cas/WKLQKD3W (PDF: 599 KB).

[15] EDPS Opinion on a Commission Proposal Amending Directive (EU) 2015/849 and Directive 2009/101/EC: Access to Beneficial Ownership Information and Data Protection Implications (Feb. 2, 2017), https://edps.europa.eu/sites/edp/files/publication/17-02-02_opinion_aml_en.pdf (PDF: 842 KB).

[16] Annex to the Article 29 Data Protection Working Party Opinion 14/2011 on Data Protection Issues Related to the Prevention of Money Laundering and Terrorist Financing (June 13, 2011), https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2011/wp186_en_annex.pdf (PDF: 227 KB).

[17] GDPR, supra note 3, at 39.

[18] See, e.g., IBM, supra note 14, at 7.

[19] The French law providing for the use of black-box algorithms by intelligence and police authorities to detect suspicious terrorist-related activities contains specific provisions on oversight of the algorithm’s use by an independent committee.

[20] Gaëlle Macke, Le fisc passe à l’intelligence artificielle pour traquer les fraudeurs, Challenges (Feb. 10, 2019, 10:47 AM), https://www.challenges.fr/economie/fiscalite/le-fisc-passe-a-l-ia-pour-traquer-les-fraudeurs_641400.

Winston Maxwell is the Director of Law & Technology Studies at Telecom Paris – Institut Polytechnique de Paris. Xavier Vamparys is the AI Ethics and Governance project manager at CNP Assurances and a visiting researcher at Telecom Paris – Institut Polytechnique de Paris.

Disclaimer

The views, opinions and positions expressed within all posts are those of the author alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of New York University School of Law.  PCCE makes no representations as to the accuracy, completeness and validity of any statements made on this site and will not be liable for any errors, omissions or representations. The copyright of this content belongs to the author and any liability with regards to infringement of intellectual property rights remains with the author.