Steering the AI Ship: Is Your Board Ready to Navigate Complexity in a Dynamic Regulatory Environment?

by Meghan Anzelc, Ph.D., Christina Fernandes-D’Souza, and Avril Ussery Sisk


Left to right: Meghan Anzelc, Ph.D., Christina Fernandes-D’Souza, Avril Ussery Sisk (Photos courtesy of authors)

Artificial intelligence (AI) has rapidly leapt into application across an ever-broadening range of human endeavors. We are in a dynamic era: as AI becomes more ubiquitous, there is ongoing discussion about how it will be harnessed for advancement across all aspects of our lives. Alongside society’s excitement about AI’s possibilities, there are growing calls for caution and a reluctance to trust private entities to protect the community from threats and potential misuse. There is also an increasing perception of weakness in the governance of AI by the very entities promoting its benefits and rapidly adopting the technology.

The current state of AI regulatory frameworks should be a major focus for corporate board directors. It is imperative that they understand the regulatory environment in which their organizations operate, and it is critical that they build resiliency to prepare for the inevitable changes in that environment. To this end, directors must examine and refine their understanding of the duty of care to include a duty to be informed in the context of rapidly changing technology; resources such as the Athena Alliance AI Governance Playbook[1] provide useful starting points. All companies are technology companies, even if they do not define themselves as such, so corporate boards must broaden their understanding of technology as part of developing strategy for each organization’s core business. Each company will reach its own conclusions about the role AI currently plays in its business.

To form a robust compliance strategy, boards must remain ready to reevaluate AI’s role in the company’s future as circumstances change. Boards must ask themselves, “Are we prepared for this challenge?” To paraphrase a well-known adage, “What got you here won’t get you there.” Is the board prepared to ask the questions necessary to strategize toward, and take advantage of, a vastly different technological landscape? Are board members prepared to guide the CEO and management team, particularly the General Counsel, in monitoring and keeping abreast of the relevant regulatory landscapes in multiple jurisdictions?

Board directors should also recognize that shareholders and stakeholders alike are deeply engaged in AI issues, and that this interest is likely to grow as more applications of AI in business, and their inherent risks, are identified. Public and governmental interest in corporate AI governance is intensifying, and this increased scrutiny demands strong AI oversight from board directors. Because AI capabilities cut across geography, industry, and function, with the potential to touch nearly every aspect of a company, boards must be able to consider AI solutions across the full breadth of the organization and understand the potential need for multi-faceted compliance approaches. This atmosphere differs greatly from the regulatory climate of the past. How can corporate boards prepare to operate in this new normal?

Boards can look to highly regulated industries for examples of success. In the United States, state Departments of Insurance have regulated the models used in underwriting and pricing decisions for decades. Over time, as risk segmentation capabilities have become more sophisticated, regulation has evolved alongside practice: from standard actuarial approaches, to the widespread use of generalized linear models (GLMs), to the current exploration of more sophisticated AI approaches. Companies that have successfully navigated this landscape have typically combined legal teams scanning the horizon for emerging regulation, close partnership between legal and compliance teams and data science and actuarial teams, and proactive engagement with regulators and industry groups to navigate and shape the rules.
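To make the GLM reference concrete: the models insurers file with state regulators are typically small, interpretable frequency or severity models whose coefficients translate into rating factors a regulator can review. The sketch below, with entirely hypothetical data and rating variables, shows the general shape of such a claim-frequency model; it is an illustration, not any insurer’s actual filed model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical policy-level data: claim counts, exposure in policy-years,
# and two illustrative rating variables.
policies = pd.DataFrame({
    "claims":      [0, 1, 0, 2, 0, 1, 0, 0, 1, 3],
    "exposure":    [1.0, 0.5, 1.0, 1.0, 0.8, 1.0, 0.3, 1.0, 0.7, 1.0],
    "territory":   ["urban", "urban", "rural", "urban", "rural",
                    "rural", "urban", "rural", "urban", "urban"],
    "vehicle_age": [2, 10, 5, 1, 8, 3, 12, 6, 4, 2],
})

# Poisson GLM with a log link; log(exposure) enters as an offset so the
# model predicts claim frequency per policy-year.
model = smf.glm(
    "claims ~ territory + vehicle_age",
    data=policies,
    family=sm.families.Poisson(),
    offset=np.log(policies["exposure"]),
).fit()

# Exponentiated coefficients are the multiplicative rating relativities,
# the form regulators typically review in rate filings.
print(np.exp(model.params))
```

Because each coefficient maps to a reviewable rating factor, regulators can examine the model directly; that interpretability is much harder to preserve with more sophisticated AI approaches, which is part of why regulation in this space continues to evolve.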

At present, a variety of AI regulatory regimes have been established across jurisdictions, several of which become operative at a future date. These delayed effective dates are intended to give affected industries notice and time to implement preliminary compliance functions. In jurisdictions that do not yet have AI regulations, discussion is ongoing about whether regulation or guidance is needed and, if so, what proposals should be considered. The major theme for corporate boards is that AI regulations or guidance are under consideration in many jurisdictions, and directors should expect to adapt and comply in every environment where they have operations, conduct business, or employ workers.

A Brief Survey of the Current AI Regulatory Environment by Region

North America

The United States currently has no comprehensive federal AI regulation. There are, however, federal laws that reference AI, and boards of directors should take note of those that may affect their business sector. Given the current U.S. Congress, comprehensive AI legislation is not anticipated on the near horizon. Even if it were, the Supreme Court’s June 28, 2024 ruling in Loper Bright Enterprises v. Raimondo, which overturned Chevron deference, may significantly constrain federal agencies’ ability to promulgate and defend regulations, including those relating to AI. Because of this, individual states are expected to continue taking the lead in AI regulation, often with an emphasis on data security, privacy, and consumer protection.

Notable state and local measures include:

The Colorado Artificial Intelligence Act (CAIA), signed into law on May 17, 2024, notably emphasizes discrimination and bias in AI and imposes a reasonable care standard on both developers and deployers of AI to avoid algorithmic discrimination in consumer and employment settings. The Act takes effect February 1, 2026. CAIA references the National Institute of Standards and Technology’s (NIST’s) Artificial Intelligence Risk Management Framework as guidance for organizations’ risk management, and its reasonable care standard can inform enterprise risk management best practices in a quickly evolving regulatory environment.

The California Consumer Privacy Act (effective January 1, 2020) focuses on notice to consumers of AI use, its effect on the consumer, and opt-out provisions relating to automated decision-making technology. The Act is amended by the California Privacy Rights Act of 2020 (CPRA); the California Privacy Protection Agency published final CPRA rules in April 2024.

Connecticut passed a bill, with provisions effective July 1, 2025 and February 1, 2026, that adopts a reasonable care standard similar to the CAIA’s.

In 2023, Texas HB 2060 created the Artificial Intelligence Advisory Council to study the use of AI by certain state agencies in Texas.

New York City’s Local Law 144 of 2021, enforced beginning July 5, 2023, prohibits organizations from using an automated employment decision tool (AEDT) in NYC unless the tool has undergone a bias audit by an independent auditor (illustrated below).
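At the center of a Local Law 144 bias audit is the impact ratio: the selection rate for each demographic category compared with the rate for the most-selected category. The sketch below uses entirely hypothetical data and shows only the core calculation; a real audit must follow the enforcement rules in full and be performed by an independent auditor.

```python
import pandas as pd

# Hypothetical AEDT screening outcomes: one row per candidate, with the
# tool's binary advance/reject decision.
candidates = pd.DataFrame({
    "category": ["F", "F", "M", "M", "F", "M", "F", "M", "M", "F"],
    "selected": [1,   0,   1,   1,   0,   1,   1,   1,   0,   0],
})

# Selection rate per category: the share of candidates the tool advanced.
rates = candidates.groupby("category")["selected"].mean()

# Impact ratio: each category's selection rate divided by the rate of the
# most-selected category, so the top category has a ratio of 1.0.
impact_ratios = rates / rates.max()
print(impact_ratios)
```

Ratios well below 1.0 for a category would flag potential adverse impact for the auditor to examine; the published rules also require breakdowns by additional and intersectional categories, which this sketch omits.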

According to the National Conference of State Legislatures (NCSL), in the 2024 legislative session at least 40 states, Puerto Rico, the U.S. Virgin Islands, and the District of Columbia introduced AI bills, and six states, Puerto Rico, and the U.S. Virgin Islands adopted resolutions or enacted AI legislation.

Canada’s proposed Artificial Intelligence and Data Act (AIDA) is also intended to protect consumer privacy at the federal level. The bill is not yet law, and there is no provincial legislative action at present.

Europe, Middle East and Africa

Among European Union nations, the EU AI Act (adopted March 13, 2024) entered into force on August 1, 2024; as comprehensive AI legislation, the majority of its provisions become enforceable on August 2, 2026. In addition, some EU countries have sector-specific laws or labor law provisions. The General Data Protection Regulation (GDPR), effective since 2018, is a major component of EU privacy and security law and applies throughout the EU and the European Economic Area. Application of these regulations continues to evolve, and board directors should expect to stay informed in order to meet their responsibilities.

Asia-Pacific

In 2021, China became the first country to regulate common AI applications, focusing on recommendation systems, an area often overlooked in AI governance, and on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Within months of the explosion of generative AI models, China released the Measures for the Management of Generative Artificial Intelligence Services, building on its earlier deep synthesis provisions. In March 2024, China released a draft “Artificial Intelligence Law.”

In 2023, the G7 nations produced the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems, providing voluntary guidance for organizations developing the most advanced AI systems. It was intended as a living document building upon the existing OECD Recommendation of the Council on Artificial Intelligence and the implementation of the OECD AI Principles, which include human rights and democratic values (including fairness and privacy); inclusive growth, sustainable development, and well-being; and the promotion of innovation and trustworthiness.

India does not have laws that directly regulate AI; however, the development and use of AI may be affected by existing laws such as the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023.

Japan currently has no comprehensive AI regulations; however, the government has established an AI Strategy Council to consider approaches to future regulation. The AI Guidelines for Business, published on April 19, 2024, encourage voluntary risk management.

AI in ASEAN nations is not regulated at this time, but these countries share a voluntary framework for addressing AI concerns, the ASEAN Guide on AI Governance and Ethics. It encourages alignment among member jurisdictions and suggests national-level and regional-level initiatives that governments can consider implementing to foster the design, development, and deployment of responsible AI.

Australia published a basic voluntary AI framework in 2019 and is currently exploring mandatory safeguards for AI. Last year, the Australian government published a discussion paper, Safe and Responsible AI in Australia.

On June 26, 2024, New Zealand published its strategic approach to work on artificial intelligence, which states: “We already have laws that provide some guardrails; further regulatory intervention should only be considered to unlock innovation or address acute risks and use existing mechanisms in preference to developing a standalone AI Act.”

Nations employ a range of approaches, from national frameworks based on business norms to sector-specific guidance, policy development, and the application of existing criminal and civil statutes; some mix these approaches. Some countries are flexible in their guidance, others less so, and approaches to compliance vary accordingly. Many governments are drafting preliminary plans for their regulatory frameworks, and some element of AI regulation should be expected nearly everywhere sooner rather than later.

Maintaining Creativity and Innovation Within Regulatory Compliance

Consider Meta’s recent announcement, reported under the headline “Scoop: Meta won’t offer future multimodal AI models in EU,” that it will withhold its open source[2] “Llama” model in multimodal[3] form from the EU, citing the unpredictable nature of the EU regulatory environment. There is concern that Meta may not have complied with the EU’s General Data Protection Regulation (GDPR) when it used publicly available posts to train its models. The decision also prevents companies outside the EU from offering products and services in the EU that rely on these models. It is unclear whether Meta will apply the same strategy to its already available text-based model, which has not been trained on EU Meta user data. This highlights the growing uncertainty between US-based tech giants and EU regulators over operations in EU markets and the use of EU citizens’ data.

In such instances, there is increased opportunity for affected jurisdictions, and the companies operating in them, to fund research that accelerates domestic technological advancement within their regulatory environments, to find unique value propositions in niche areas, and to build strategic relationships and collaboration with similarly focused regulatory environments.

Preparing Appropriately for Future Regulation

To provide effective guidance and oversight, board directors must understand the interplay between relevant regulations and the shifting priorities of regulatory authorities. Developing business strategy on that foundation requires commitment, creativity, and innovative thinking about regulatory compliance. See the Ethics and Compliance Initiative’s Blue Ribbon Commission Report.

Corporate directors must anticipate the changing regulatory environment and align business strategy, corporate purpose and culture, ethics, and compliance. Highly regulated companies likely already have considerable experience with compliance and with planning for future regulation; their boards will need to consider how to guide the organization in bringing along departments and functions that historically have had little compliance expertise. For directors of less regulated organizations, governing AI may be how the company starts building compliance expertise, and it will be critical for those boards to guide management successfully through this new world.

To promote appropriate AI governance, boards can expand many of their current activities to include AI. The following provides a recommended set of actions:

  • Ensure that regular reporting on compliance functions is made available to the board.
  • Develop effective communication strategies across all stakeholder groups; these go a long way toward mitigating the enterprise risks associated with AI.
  • Expand existing ethics and compliance programs to encompass AI, including internal AI development as well as employees using AI, to minimize the potential for unintended misuse of tools.
  • Especially where AI is concerned, board directors must model the adaptability required of the business. A major theme in current regulatory debates is how AI will be held accountable and where the human element should be maintained; deepening the board’s knowledge of talent development and human capital capabilities is therefore essential.
  • AI provides an additional lens for boards on talent strategy and compliance. How do norms and expectations differ? How will applications of AI technology differ among job functions and employees?
  • Boards should remain informed of the General Counsel’s strategies for regulatory compliance and of how the legal department is monitoring relevant regulatory bodies and legislative committees. What support does the legal department need to better understand and interpret potential regulatory impacts? Not all legal teams have the expertise to identify and understand the regulatory implications for the work of data scientists and AI teams.

Boards should consider opportunities to comment on or shape pending AI regulations and legislation, and directors should ensure that government relations and public affairs strategies are integrated into the comprehensive business strategy. The government relations strategy can help drive compliance strategy when rules are pending, shifting, or otherwise uncertain. Through this targeted approach, the business can stay nimble and current as regulations in relevant jurisdictions change or are replicated elsewhere. While much about AI regulation remains to be worked out, boards can take concrete steps today to better prepare themselves and their organizations to navigate responsible AI oversight successfully.

Footnotes

[1] Disclaimer: two of this article’s authors are co-authors of this playbook.

[2] Open source means that the source code is publicly available, and anyone is free to examine, modify and distribute it.

[3] Multimodal means the model is able to work across text, video, images, and audio.

Meghan Anzelc, Ph.D., is President and Chief Data and Analytics Officer at Three Arc Advisory; Christina Fernandes-D’Souza is Director of Data Science at Three Arc Advisory; and Avril Ussery Sisk is an Independent Director at NextUp Solutions.

The views, opinions, and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness, or validity of any statements made on this site and will not be liable for any errors, omissions, or representations. The copyright of this content belongs to the author(s), and any liability with regard to infringement of intellectual property rights remains with the author(s).