Category Archives: Discrimination

California Adopts New Employment AI Regulations Effective October 1, 2025

by Arsen Kourinian, Ruth Zadikany, and Remy N. Merritt

Left to right: Arsen Kourinian, Ruth Zadikany, and Remy N. Merritt (photos courtesy of Mayer Brown)

The California Civil Rights Council (CRC) recently announced that it has finalized regulations that clarify how California’s anti-discrimination laws apply to the use of artificial intelligence (AI) and automated decision systems (ADSs) in employment decision-making (the “Regulations”). The Regulations provide that the use of an ADS (including AI) in making employment decisions can violate California law if such tools discriminate against employees or applicants — either directly or due to disparate impact — on the basis of protected characteristics (including race, age, religious creed, national origin, gender, and disability).

Continue reading

DOJ Defines “Illegal DEI,” Warns Recipients of Federal Funds to Take Notice

by Adam S. Hickey, Marcia E. Goodman, Ruth Zadikany, and Hiral D. Mehta

Left to right: Adam S. Hickey, Marcia E. Goodman, Ruth Zadikany, and Hiral D. Mehta (photos courtesy of Mayer Brown)

On July 29, 2025, U.S. Attorney General Pam Bondi issued Guidance for Recipients of Federal Funding Regarding Unlawful Discrimination (the “Guidance”). Following the Department of Justice’s (“DOJ”) creation of the Civil Rights Fraud Initiative and joint guidance from DOJ and the U.S. Equal Employment Opportunity Commission (“EEOC”) on “unlawful DEI-related discrimination,” the Guidance is the administration’s most concrete statement to date of what it views as “illegal DEI” and a likely roadmap for DOJ’s False Claims Act (“FCA”) investigations under the Civil Rights Fraud Initiative.

Continue reading

DOJ Civil Division Prioritizes Illegal DEI

by Jennifer Loeb, Austin Evers, Grace Bruce, and Young Park

From left to right: Jennifer Loeb, Austin Evers, Grace Bruce, and Young Park (photos courtesy of Freshfields Bruckhaus Deringer LLP)

Combatting “illegal” Diversity, Equity and Inclusion (DEI) remains a “Day One” priority in Washington. President Trump issued executive orders on DEI on his first day in office. Attorney General Bondi likewise issued her own memos on her first day at the Department of Justice. And now, the new head of the Department of Justice’s Civil Division has followed suit, issuing his own memo on his first day and identifying two DEI-related topics among the Division’s top five priorities. This is yet another indication that the administration is shifting into the enforcement phase of its DEI reset. Health care and life sciences companies have particular reason to take note.

Continue reading

Supreme Court Rejects Heightened Test for “Reverse Discrimination” Claims Under Title VII

by Matthew M. Yelovich, Jennifer Kennedy Park, Christopher R. Kavanaugh, and Ethan Singer

From left to right: Matthew M. Yelovich, Jennifer Kennedy Park, Christopher R. Kavanaugh, and Ethan Singer (photos courtesy of Cleary Gottlieb Steen & Hamilton LLP)

On June 5, 2025, the Supreme Court unanimously ruled in Ames v. Ohio Department of Youth Services that plaintiffs who belong to a majority group do not face a heightened burden to establish a disparate treatment claim under Title VII of the Civil Rights Act of 1964 (“Title VII”). The Court’s holding resolves a significant circuit split and affirms that Title VII’s protections apply equally to all individuals. This decision arrives as the Trump Administration has launched significant new initiatives to bring Title VII and civil rights investigations and claims against employers with diversity, equity, and inclusion (“DEI”) programs that the Administration views as unlawful. In light of this decision and the various DEI-related Executive Orders, employers should consider the following:

  • Employers should continue to carefully scrutinize human resources-related programs that consider demographic characteristics in any way.
  • Employers should review their whistleblower programs, policies, and practices to ensure they are robust with respect to discrimination-related issues.
  • Notably, the Ames decision considered a disparate treatment claim, and the Administration has ordered the Equal Employment Opportunity Commission (“EEOC”) and other agencies to cease pursuing disparate impact investigations and claims.[1]

Continue reading

Sweeping AI Legislation Under Consideration in Virginia

by Beth Waller and Patrick Austin

Beth Burgin Waller and Patrick J. Austin (photos courtesy of Woods Rogers Vandeventer Black PLC)

Virginia, a leader in technology- and privacy-related regulation, is methodically examining artificial intelligence legislation. In particular, significant legislation establishing a regulatory framework for high-risk artificial intelligence (AI) systems is currently under consideration by the Virginia General Assembly’s Joint Commission on Technology and Science (JCOTS). JCOTS, a permanent legislative agency that studies and develops technology- and science-related policy in Virginia, has held several hearings to gather expert input on AI issues and has formed an AI-specific subcommittee. The JCOTS AI Subcommittee is considering two pieces of legislation that would govern the use of high-risk AI systems by public entities and private-sector entities.

Continue reading

“Operation Chokepoint 2.0”: De-Banking Policies and the Adverse Use of Reputational Risk in Bank Supervision

by Stephen T. Gannon, Max Bonici, Elizabeth Lan Davis, and Kristal Rovira

Left to right: Stephen T. Gannon, Max Bonici, Elizabeth Lan Davis, and Kristal Rovira (photos courtesy of Davis Wright Tremaine LLP)

How subjective supervisory standards suppressed innovation and damaged innovators.

“The power to regulate—in addition to the power to tax—is the power to destroy.”

Peter Wallison, Judicial Fortitude (2018)

As we have previously noted, we expect that the second Trump Administration will be significantly more favorable to crypto than the Biden Administration, especially with the recent appointment of David Sacks as the Administration’s “Crypto Czar.” We anticipate that in short order the new Administration will address “de-banking,” a regulatory practice that has vexed the digital asset industry—and banking in general—over the last several years. In this context, “de-banking” means the cancellation of banking services for crypto entities, for individuals associated with them, or for crypto-related activities. The practice has been sharply criticized and has become even less comprehensible as the digital asset industry has matured and embraced (indeed, has sought) reasonable regulation. In the last several days, attention to this issue has increased sharply as a result of comments by Marc Andreessen on the Joe Rogan podcast.

Regrettably, the de-banking problem is not new. De-banking crypto is simply the latest variation of regulators using vague and amorphous standards to supervise bank conduct through the subjective lens of what the federal banking agencies call “reputational risk.”

Below we discuss how we got here and some ways forward.

Continue reading

Maryland Legislature Passes State Privacy Bill with Robust Requirements and Broad Threshold for Application

by Marshall Mattera and Amanda Pervine

Marshall J. Mattera (photo courtesy of Hunton Andrews Kurth)

The Maryland legislature recently passed the Maryland Online Data Privacy Act of 2024 (“MODPA”), which was delivered to Governor Wes Moore for signature and, if enacted, will impose robust requirements with respect to data minimization, the protection of sensitive data, and the processing and sale of minors’ data.

Continue reading

Prohibited AI Practices—A Deep Dive into Article 5 of the European Union’s AI Act

by Dr. Martin Braun, Anne Vallery, and Itsiq Benizri

From left to right: Dr. Martin Braun, Anne Vallery, and Itsiq Benizri (photos courtesy of Wilmer Cutler Pickering Hale and Dorr LLP)

Article 5 of the AI Act essentially prohibits AI practices that materially distort people’s behavior or that raise serious concerns in democratic societies.

As explained in our previous blog post, this is part of the overall risk-based approach taken by the AI Act, which means that different requirements apply in accordance with the level of risk. In total, there are four levels of risk: unacceptable, in which case AI systems are prohibited; high risk, in which case AI systems are subject to extensive requirements; limited risk, which triggers only transparency requirements; and minimal risk, which does not trigger any obligations.

Continue reading

The European Court of Justice Tightens the Requirements for Credit Scoring under the GDPR

by Katja Langenbucher

Professor Katja Langenbucher (photo courtesy of author)

The quality of a credit scoring model depends on the data it has access to. On December 7, 2023, the European Court of Justice (ECJ) decided its first landmark case on data protection in a credit-scoring context. The court issued a preliminary ruling on a consumer’s request that a German company (“Schufa”) disclose credit-score-related data. The practice of credit reporting and credit scoring varies enormously across Europe. Somewhat similar to the US, the UK has separate credit reporting and scoring agencies. In France, the central bank manages a centralized database accessible to credit institutions, which build their own proprietary scoring models. In Germany, a private company (the “Schufa”) holds a de facto monopoly, maintaining data on 68 million German citizens and producing the enormously widespread “Schufa” score. Banks look to that score when extending credit, as do landlords, mobile phone companies, utility suppliers, and, sometimes, potential employers. This everyday use stands in stark contrast to the lack of transparency about which data Schufa collects and how it models the score.

Continue reading

California Privacy Protection Agency Publishes Draft Regulations on Automated Decisionmaking Technology

by Hunton Andrews Kurth LLP

On November 27, 2023, the California Privacy Protection Agency (“CPPA”) published its draft regulations on automated decisionmaking technology (“ADMT”). The regulations propose a broad definition for ADMT that includes “any system, software, or process—including one derived from machine-learning, statistics, or other data-processing or artificial intelligence—that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decisionmaking.” ADMT also would include profiling, which would mean the “automated processing of personal information to evaluate certain personal aspects relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.”

Continue reading