State Governments Move to Regulate AI in 2024

by Louis W. Tompros, Arianna Evers, Eric P. Lesser, Allie Talus, and Lauren V. Valledor


Recently, New York Governor Kathy Hochul proposed sweeping artificial intelligence (AI) regulatory measures intended to protect against untrustworthy and fraudulent uses of AI. Presented as part of her FY 2025 Executive Budget, the bill would amend existing penal, civil rights and election laws—establishing a private right of action for voters and candidates impacted by deceptive AI-generated election materials and criminalizing certain AI uses, among other measures. Governor Hochul’s proposals are part of a wider trend of governors and state lawmakers taking more expansive steps to regulate AI, a trend that deserves attention from businesses developing and using AI.

In 2023, legislators in 31 states introduced at least 191 AI-related bills—a 440 percent increase from 2022—but only 14 of the bills became law, according to an analysis from BSA | The Software Alliance. This year, across the country, state leaders are introducing legislation and taking executive actions related to AI at a rapid pace.

State legislatures and governors have, so far, enacted regulations that have sought to address certain use cases for AI and to prepare states to regulate AI in the future. While no state has passed a comprehensive AI law and there remains very little AI-specific law on the books, that could change in 2024. Given that only 10 states have divided governments this year, there is the potential for more legislative proposals to be successful.

Meanwhile, at the federal level, on October 30, 2023, the Biden Administration issued its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, establishing an effort across the federal government to guide the development and implementation of AI in the United States. Despite the potential for this move to drive federal regulation and policy, there have been few comprehensive federal proposals, and major legislation is unlikely given that it is an election year.

The unlikelihood of comprehensive federal AI legislation, combined with the surge of activity at the state level, raises the potential for companies to soon find themselves dealing with a patchwork of AI laws that are potentially inconsistent or otherwise difficult to navigate and comply with. This is what happened with privacy law: the federal government failed to pass a comprehensive privacy law, Europe passed the General Data Protection Regulation, and states swept in to fill the vacuum. Fourteen state legislatures have now passed their own comprehensive privacy laws, with more expected this year. The European Union’s Artificial Intelligence Act will put added pressure on states to regulate the technology quickly and in a meaningful way. Organizations need to understand how these issues are playing out at the state level, understand the concerns driving legislators and state executive branches to act, and anticipate how those concerns should be reflected in any AI governance programs they are building.

State governments have taken note of this potential for jurisdictional conflicts. An informal bipartisan group of more than 60 state legislators from nearly 30 states is working to avoid inconsistent regulations that could stifle innovation and overly burden industry. Led by Connecticut State Senator James Maroney (D), the group has heard from experts, reviewed existing frameworks and discussed ideas.[1]

Below is a snapshot of state activity that our lawyers are tracking and advising our clients on.

I. Preparing to Regulate AI

Several states have worked to understand AI and to prepare themselves to regulate it in the future by establishing task forces, working groups, audits and other initiatives. Often, these groups are tasked with submitting reports containing policy recommendations. At least 18 states have established groups to study AI through either legislation or executive orders.

A. Legislation

Eleven states—Alabama, California, Colorado, Connecticut, Illinois, Louisiana, New Jersey, North Dakota, Texas, Vermont and Washington—have enacted laws that delegate research obligations to government or government-organized entities, such as temporary task forces or working groups of subject-matter experts, tasked with learning about AI, its potential consequences and the role for policymakers.

Some laws require these groups to submit reports with policy recommendations and could lead to regulations or a more concrete regulatory framework. For instance, in June 2023, Connecticut enacted legislation creating a working group. Among the group’s responsibilities was making recommendations on AI legislation in a report, which it released at the start of February 2024. A few weeks later, on February 21, Connecticut state legislators introduced SB 2—a sweeping bill that, among other measures, seeks to prohibit the distribution of deceptive political deepfakes and revenge pornography and to establish requirements on the implementation of AI systems in decision-making related to areas such as employment, government services, financial services, housing and healthcare. If passed, SB 2 could serve as model legislation for other states regulating AI. Similarly, Wisconsin lawmakers passed several bills last month that resulted from the work of bipartisan task forces.

B. Executive Orders

Meanwhile, governors from 11 states have issued executive orders establishing task forces or directing existing government agencies to develop guidance on AI regulation, with Massachusetts being a recent example.

On February 14, 2024, Massachusetts Governor Maura Healey signed an executive order establishing the Artificial Intelligence Strategic Task Force (Task Force). The 26-member Task Force will consist of representatives from state and local government, organized labor groups and sectors such as higher education, life sciences, finance, healthcare and technology. The Task Force will meet at least once a month to study AI’s impact on areas ranging from state agencies to private businesses. It will also advise Governor Healey on ensuring that Massachusetts supports AI-focused startups, promotes the creation of AI-related jobs, and enacts policies concerning AI development and regulation. Following this executive order, Governor Healey will also seek $100 million in upcoming legislation to support AI adoption and application in the state government and in Massachusetts’ private sectors. And at a Boston Chamber of Commerce breakfast on February 27, Governor Healey emphasized that, much as the state was the center of the life sciences in the 2000s, Massachusetts can become the center of AI development.

As with privacy law, California is among the leaders in AI regulation. On September 6, 2023, California Governor Gavin Newsom issued an executive order (Order) concerning generative AI. The Order enlisted existing government agencies to draft a report on the beneficial uses and potential risks of generative AI use in state agencies. By November 2023, the California Government Operations Agency released a report that identified six ways the government can use generative AI, while also discussing the various risks generative AI presents.

The Order also calls for risk assessments concerning the use of generative AI in California’s energy infrastructure and in “high-risk cases,” such as decision-making for essential goods and services. The Order directs state agencies to create guidelines on the implementation of generative AI, including training for state employees in using these technologies. Lastly, the Order directs legal counsel in all state agencies to review regulatory issues concerning the development and implementation of generative AI.

Governors in Alabama, Maryland, New Jersey, Oklahoma, Oregon, Pennsylvania, Virginia, Washington and Wisconsin have also issued executive orders that either establish AI task forces or call on existing government agencies to study and facilitate the implementation of AI in the government and to offer regulatory guidance on AI.

II. Regulating AI

State legislatures are currently regulating AI in two primary ways: (1) incorporating AI provisions into privacy laws and (2) developing AI-specific rules for certain use cases. Of the 20 AI-related laws enacted in 2023, six are comprehensive state privacy laws that allow residents to opt out of “profiling” in furtherance of certain solely automated decisions, eight focus on regulating the government’s use of AI, three limit the use of AI in political campaigns, and the rest address other discrete issues.

Below are examples of state legislatures regulating—or proposing to regulate—AI through privacy laws and through AI-specific measures to address the use of the technology in elections, decision-making (algorithmic discrimination), employment, healthcare and state government.

A. Regulating AI Through Privacy Laws

New state privacy laws, such as the California Privacy Rights Act, define “personal information” broadly and create compliance obligations for companies that are subject to these laws. Because AI models are frequently trained on personal data or process it to produce an output, these compliance obligations are relevant both for training AI models and for many AI use cases. In addition, 10 US state comprehensive privacy laws—either already in force or set to come into force—create rights for residents to opt out of “profiling” in furtherance of certain automated decisions.

Most of these laws define profiling as the “automated processing” of personal data to “evaluate, analyze or predict” characteristics of a person’s “economic situation, health, personal preferences, interests, reliability, behavior, location or movements.” In these 10 states, residents have an opt-out right in connection with profiling used in furtherance of a decision that produces a legal or other similarly significant effect. Companies must also perform data protection impact assessments if the profiling presents a reasonably foreseeable risk of (1) unfair or deceptive treatment of, or unlawful disparate impact on, consumers, (2) financial, physical or reputational injury to consumers, (3) a physical or other intrusion on the solitude or seclusion, or the private affairs or concerns, of consumers, where such intrusion would be offensive to a reasonable person, or (4) other substantial injury to consumers. In Colorado, there are regulations that outline additional requirements for companies engaging in “Solely Automated Processing” and “Human Reviewed Automated Processing” that are used to make important decisions about consumers.

B. Elections—Deepfakes

Deepfakes—synthetic media that is either manipulated or wholly generated by AI to deceive or impersonate—have become a growing concern, particularly as the 2024 elections approach and as high-profile intimate or sexually explicit deepfake images spread. For example, just days before the 2024 New Hampshire primary this January, voters received an AI-generated robocall imitating President Joe Biden’s voice. The recording discouraged New Hampshire voters from voting, urging them to save their votes for the November election. As AI technology improves at an unprecedented speed, states are trying to keep up and maintain election integrity by regulating the use of AI in elections.

1. Legislation

Both political parties have become concerned about the use of AI-generated deepfakes in elections and their potential to manipulate voters. Texas and California enacted laws targeting election deepfakes in 2019. In 2023, Michigan, Minnesota and Washington passed laws that either prohibit or limit the use of AI-generated deepfakes in elections or require disclosures. Liability varies among these laws: in Minnesota, for instance, use of a deepfake to influence an election constitutes a crime, whereas Washington’s law imposes only civil liability for a similar offense. All three 2023 laws received bipartisan support.

As we approach November, there has been a surge in state legislators proposing similar laws. In the first two months of this year, legislators introduced over 50 bills that would regulate the use of deepfakes in elections. Like the laws passed in 2023, these proposals range from requiring disclosures on campaign materials using AI-generated content to prohibiting the use of deepfakes in campaign materials within a certain period before an election. For instance, on March 1, 2024, the Florida legislature approved a measure that would require a specific disclaimer on political advertisements created in whole or in part using generative AI. The bill would apply to those who pay for, sponsor or approve the advertisement, and it would allow people to report potential violations to the Florida Elections Commission. Failure to include the disclaimer would be a first-degree misdemeanor, and violators would also be subject to civil penalties. Should Governor Ron DeSantis sign the bill, these requirements would take effect on July 1, 2024. Meanwhile, Ohio’s HB 410, introduced on February 12, 2024, would prohibit the use of deepfakes intended to influence an election within 90 days of that election.

In addition to election concerns, proposals address other uses of deepfakes. Minnesota’s HB 1370, for example, would prohibit the use of deepfakes both in elections and to depict nonconsensual pornography.

2. Executive Actions

As mentioned above, in her executive budget, New York’s Governor Hochul proposed regulations on generative AI in elections and in other contexts. This action comes after Governor Hochul vetoed a bill in November 2023 that would have created a commission to study AI before the state proposed regulations.

Governor Hochul’s proposal would create a private right of action for both registered voters and candidates impacted by deceptive AI-generated images, videos and audio in election materials. It would also require disclosure of AI use in all forms of political communication within 60 days of an election.

In addition, the proposal would expand protections against the use of deepfakes in a broader context. For instance, the proposal amends existing revenge porn statutes to include images, video and audio generated by AI. Moreover, the proposal would impose criminal liability for the unauthorized use of AI in impersonation, identity theft and coercion, as well as the unlawful dissemination of AI-generated images, videos or audio recordings.

C. Algorithmic Discrimination

Some studies have shown the potential for automated systems to produce inequitable outcomes and amplify existing bias, and states are therefore looking for ways to protect individuals against “algorithmic discrimination,” or an automated decision tool’s differential treatment of or impact on an individual or group based on their protected class. Although AI is generally a matter of bipartisan concern, efforts to address algorithmic discrimination tend to receive Democratic support. Legislatures in the District of Columbia, California, Connecticut, Hawaii, Illinois, New Jersey, New York, Oklahoma, Rhode Island, Vermont, Virginia, and Washington are currently considering bills that would impose varying obligations on those that create these tools (developers) and/or those that use them (deployers) to minimize the risk of the tools’ potentially discriminatory impact. Many of these bills are modeled after one another.

The bills generally target tools that might result in “consequential decisions” around people’s rights and opportunities, such as employment, education, housing, healthcare or health insurance, and financial services. Some bills, such as Connecticut’s SB 2, impose different obligations for “general,” “generative” and “high-risk” AI systems.

Among other safeguards, such as governance and disclosure requirements, all of the bills require some form of impact assessment. Often, they require that those assessments be made available to government agencies and/or the public. Under California’s AB 2930, for example, the impact assessments would include, among other elements, an analysis of the potential adverse impacts on protected classes; a description of the safeguards or measures that have been or will be implemented to address reasonably foreseeable risks of algorithmic discrimination; a description of how a human will use or monitor the tool; and a description of how the tool has been or will be evaluated for validity or relevance.

Most of these proposals would not allow individuals to sue businesses for discrimination. A prominent example is California’s AB 2930, which was recently introduced without earlier language that would have allowed for a private right of action. Instead, only the state attorney general and public attorneys would be able to file suits, while the Civil Rights Department would be able to investigate allegations of discrimination.

D. Automated Employment Decision-Making

Combating algorithmic discrimination in hiring and employment decisions has also been a specific focus at the state and local levels.

Laws regulating the use of AI in hiring were on the books even before this recent surge in attention to AI. Both Illinois’ Artificial Intelligence Video Interview Act and Maryland’s H.B. 1202 restrict the use of facial recognition and video analysis tools during preemployment interviews unless informed consent is obtained first.

New York City was the first major jurisdiction to require a bias audit. Enacted in 2021 and effective as of July 5, 2023, the New York City Automated Employment Decision Tools Law prohibits employers and employment agencies from using automated employment decision tools (AEDTs) unless (1) the tool has been subjected to a bias audit within a year of its use or implementation, (2) information about the bias audit is publicly available, and (3) employers provide certain written notices to employees or job candidates.

The law’s definition of an AEDT includes any automated process that either replaces or substantially assists discretionary decision-making for employment decisions. An automated process that screens resumes and schedules interviews based on such screening would be subject to the law’s requirements, for example. But an automated process that simply transfers information from a resume to a spreadsheet would not be subject to the law’s requirements.
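To make the audit requirement more concrete, the arithmetic at the core of a bias audit is fairly simple: the city’s implementing rules call for, among other things, calculating selection rates across demographic categories and comparing each category’s rate to that of the most-selected category to produce an impact ratio. The Python sketch below is a minimal, hypothetical illustration of that calculation only—the category labels, counts and helper functions are invented for this example, and an actual bias audit must follow the rules’ full methodology and be conducted by an independent auditor.

```python
# Hypothetical illustration of the selection-rate and impact-ratio arithmetic
# behind a bias audit under the NYC Automated Employment Decision Tools Law.
# All category names and counts are invented; a real audit must follow the
# city's implementing rules and be performed by an independent auditor.

from typing import Dict


def selection_rates(applicants: Dict[str, int], selected: Dict[str, int]) -> Dict[str, float]:
    """Selection rate per category: number selected divided by number of applicants."""
    return {category: selected[category] / applicants[category] for category in applicants}


def impact_ratios(rates: Dict[str, float]) -> Dict[str, float]:
    """Impact ratio per category: its selection rate divided by the highest
    selection rate observed across all categories."""
    highest = max(rates.values())
    return {category: rate / highest for category, rate in rates.items()}


if __name__ == "__main__":
    # Invented screening data for a hypothetical resume-screening AEDT.
    applicants = {"Category A": 400, "Category B": 350, "Category C": 250}
    selected = {"Category A": 120, "Category B": 84, "Category C": 50}

    rates = selection_rates(applicants, selected)
    ratios = impact_ratios(rates)
    for category in applicants:
        print(f"{category}: selection rate {rates[category]:.2f}, "
              f"impact ratio {ratios[category]:.2f}")
```

In this invented example, Category C’s impact ratio of roughly 0.67 is the kind of disparity an auditor would flag, and summary results of this sort are what employers must make publicly available under the law.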

More recently, several states—including Massachusetts, New Jersey, New York, Vermont and Washington—introduced bills targeting AEDTs. Although there are some differences between these bills, they generally would require employers to provide written disclosure to employees before using AI or generative AI to make employment-related decisions. And, like other automated decision tools, AEDTs would have to go through regular impact assessments or bias audits. Under some bills, including New York’s S7623A, employers could not rely “solely” on automated tools to make hiring, promotion, termination, disciplinary or compensation decisions.

E. Healthcare

In 2023, at least 11 states considered legislation related to the use of AI in physical and mental health services, according to a report from the National Conference of State Legislatures. Proposed legislation on AI in the healthcare sector falls into two primary categories: clinical use and use by insurers.

1. Clinical Use

Georgia was one of the first states to expressly permit the use of AI in a clinical setting. Enacted on May 2, 2023, and effective on July 1, 2023, Georgia’s HB 203 allows the use of an “assessment mechanism,” including AI devices, to conduct eye assessments. However, HB 203 requires that data from these eye assessments not be the sole basis for issuing a prescription, and that before using an assessment mechanism, prescribers ensure the patient has had a traditional eye exam in the past two years. Massachusetts, Rhode Island and Texas also proposed legislation regulating the use of AI in mental health services. Although these bills failed in Massachusetts and Texas, Rhode Island recommended further study of the measure.

While some states are seeking ways to implement AI in the healthcare sector, others are legislating to restrict its use. For instance, Maine and Illinois introduced bills that would require healthcare facilities to defer to a nurse’s judgment over any decision made by AI.

2. Use by Insurers

Meanwhile, state legislation concerning insurers’ use of AI has so far focused on preventing discrimination or increasing transparency. For instance, on February 17, 2023, California introduced a bill that would prevent healthcare service plans or insurers from discriminating “on the basis of race, color, national origin, sex, age, or disability through the use of clinical algorithms in its decisionmaking.” Oklahoma’s proposed HB 3577, introduced on February 5, 2024, would require health insurers to disclose their use of AI algorithms on their websites and to submit any AI systems to the state’s department of insurance for review. These measures bolster any requirements put forward by state insurance regulators, though to date only Colorado has formal regulations in place specifically for AI use in the life insurance industry.

Moving forward, state regulators may look to the model bulletin on the use of AI in insurance adopted by the National Association of Insurance Commissioners in December 2023. The bulletin, which is intended to be adopted by individual state insurance departments, sets expectations for how insurers should govern the development and use of AI technologies and outlines information that regulators would be able to request from an insurer regarding its use of AI during an investigation.

F. AI in State Government

Several measures by state legislatures and executive branches focus on AI’s implementation in state governments. These measures are relevant for private businesses that conduct business with states and for understanding where regulations may be headed more broadly.

For example, under a law passed in Connecticut last year, neither state agencies nor the Judicial Department may implement AI systems that result in unlawful discrimination or disparate impact, and no state agency may enter into a contract with a business unless the contract contains a provision requiring compliance with all applicable data privacy provisions.

Similarly, Washington Governor Jay Inslee’s January 30, 2024, executive order requires vendors of government agencies to follow the National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework. Moreover, Virginia Governor Glenn Youngkin’s executive order enacted the AI Policy Standards and AI Information Technology (IT) Standards. The AI Policy Standards call for rigorous vetting of AI vendors used by state agencies, while the AI IT Standards set out specific design, business and security requirements for AI systems used by state agencies and suppliers, or third parties acting on their behalf.

Footnotes

[1] Caitlin Andrews, What’s next for US state-level AI legislation, IAPP (Nov. 3, 2023), https://iapp.org/news/a/whats-next-for-us-state-level-ai-legislation/.

Louis W. Tompros is a Partner, Arianna Evers is Special Counsel, Eric P. Lesser is Senior Counsel, and Allie Talus and Lauren V. Valledor are Associates at Wilmer Cutler Pickering Hale and Dorr LLP. This post first appeared on the firm’s blog.

The views, opinions and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness or validity of any statements made on this site and will not be liable for any errors, omissions or representations. The copyright of this content belongs to the author(s), and any liability with regard to infringement of intellectual property rights remains with the author(s).