The NYU Law Program on Corporate Compliance and Enforcement’s Executive Director attended the International Association of Privacy Professionals’ annual Global Privacy Summit in Washington, D.C., earlier this week. This post provides observations and highlights from the conference.
CFPB Director Rohit Chopra Highlights Priorities for Enforcement Policy, FinTech, and “Dark Patterns”
In a fireside chat, Consumer Financial Protection Bureau (CFPB) Director Rohit Chopra outlined key concerns and enforcement priorities for the agency.
On the subject of cryptocurrencies and banking, Director Chopra opined that much of the discussion around crypto was a “distraction” and that a genuine source of his concern in this area was the commingling of banking and technology. Using as an example what he characterized as Meta’s “brazen” attempt to create its own currency – the unsuccessful Libra project – he highlighted widely used technology-driven payment systems such as Apple Pay, Venmo, WeChat Pay, and others as posing potential privacy, safety-and-soundness, and fairness risks to consumers. For example, with the recent banking crisis as a backdrop, Director Chopra asked what would happen if there were a run on one of these payment systems, and what would become of any money that consumers had stored with the networks. In a similar vein, Director Chopra questioned the state of consumers’ rights to dispute fraudulent transactions on digital payment networks, expressing a desire for protections similar to those consumers currently have with credit cards.
When it comes to the merger of banking and technology, Director Chopra also appeared concerned with issues of privacy and civil rights, noting the potential for consumer surveillance, unfair profiling, and discrimination. He does, however, see a potential positive use of consumer data – producing fairer and more accurate evaluations of consumer creditworthiness, if properly regulated – but noted that the Fair Credit Reporting Act requires credit decisions, including those made by AI, to be explainable.
Moving away from banking and finance, Director Chopra discussed “dark patterns,” which he described as the digital “tricks” that firms use to induce consumers to do and buy things they do not actually want. Observing that the CFPB is now working with technologists to better understand what the private sector is doing and to detect abuses, Director Chopra highlighted the CFPB’s 2022 enforcement action against TransUnion and a senior executive, in which the CFPB alleged that TransUnion violated a previous order and improperly induced consumers to sign up for credit monitoring services while making it difficult for them to opt out of the services and future payments.
Finally, Director Chopra outlined the CFPB’s enforcement priorities. He held up Facebook’s $5 billion settlement in 2019 with the Federal Trade Commission (FTC) over alleged privacy abuses as an example of what regulators should not be doing in terms of enforcement, calling the resolution an “embarrassment” because (i) it allowed the firm to simply buy its way out of trouble by paying a fine without actually having to change the business model that allegedly led to the abuses in the first place and (ii) it did not hold any individuals accountable. (At the time of the settlement, Director Chopra was one of two FTC Commissioners to file lengthy dissents from the resolution.) Going forward, Director Chopra intends to prioritize punishing repeat offenders, crafting settlements that focus less on punitive fines and more on compelling businesses to move away from abusive business models, and holding individuals accountable.
AI: Skynet Is Not Becoming Self-Aware But Poses “New and Substantial” Risks
FTC Commissioner Alvaro Bedoya delivered a keynote speech on AI. Regarding “generative AI” – the type of AI that can write letters, provide cooking recipes, or create images in response to user prompts – Commissioner Bedoya said that one did not need to heed any “extreme views” to believe that the technology can produce both “awe and wonder” and “new and substantial risks.”
With respect to risks, Commissioner Bedoya said he does not see the technology posing “existential threats to our society.” However, he emphasized the capacity for the technology to cause harm to consumers because it produces results that are not explainable, noting that the nature of the technology prevents even experts and technologists from “opening the hood to see how it works.” As an example, Commissioner Bedoya cited the fact that large language models (LLMs), of which ChatGPT is one example, can play chess, despite no one expressly seeking to “train” them to do so. In terms of particular harms that concern him, Commissioner Bedoya urged developers to “think twice” before they deploy a product that leads people to think it is a human being or that affects people’s mental health, particularly that of children or teens.
On the subject of regulation, Commissioner Bedoya took issue with the notion that AI is “unregulated,” stating that such thinking benefits firms that seek to avoid regulation. He cited a number of existing laws and powers that regulators and consumers have to police the conduct of firms that deploy AI technology: (i) existing unfair, deceptive, and abusive trade practices laws; (ii) civil rights laws; (iii) tort and product liability laws; and (iv) laws like the Fair Credit Reporting Act that require decisions made by AI in certain circumstances to be explainable. In the end, Commissioner Bedoya called for a collaborative approach between developers, regulators, and civil society entities, stating that the public sector needs to be involved in “stress testing” the AI models.
Joseph Facciponti is the Executive Director of PCCE and a former prosecutor at the U.S. Attorney’s Office for the Southern District of New York.
The views, opinions and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of New York University School of Law. PCCE makes no representations as to the accuracy, completeness and validity of any statements made on this site and will not be liable for any errors, omissions or representations. The copyright of this content belongs to the author(s) and any liability with regards to infringement of intellectual property rights remains with the author(s).