
For many years, regulatory uncertainty in the United States has been part of the landscape for innovators, particularly with the rise of emerging technologies such as cryptocurrencies, blockchain, and artificial intelligence. It can, unfortunately, thwart the progress of responsible innovation and place our innovators at a competitive disadvantage.
We have recently seen a dramatic example of regulatory uncertainty in the artificial intelligence space.
On October 30, 2023, President Biden signed Executive Order 14110, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The order emphasized the importance of governing AI development and promoting responsible use to harness its benefits while mitigating substantial risks. It highlighted eight guiding principles and priorities, including ensuring AI safety and security, promoting competition in the AI industry, preventing AI-enabled threats to civil liberties and national security, and maintaining U.S. global competitiveness in the AI field. The Executive Order also required major federal agencies to create dedicated “chief artificial intelligence officer” positions within their organizations.
Presumably, many artificial intelligence firms took steps to follow the guidance set forth in President Biden’s Executive Order. On January 20, 2025, however, President Donald Trump rescinded that Executive Order, together with all its requirements, within hours of assuming office.
With the stroke of a pen, everything changed.
How can innovators mitigate the risk of regulatory uncertainty?
Regulatory uncertainty is not the innovator’s friend. It can put sand in the gears of the innovation process and create headwinds to progress.
But by no means are innovators dead in the water.
We can still move forward productively in many cases despite regulatory uncertainty. While the community must follow whatever guidance and rules regulators promulgate, the most helpful course for the AI community is to focus on key fundamentals: adherence to timeless principles of good governance, responsible oversight, prudent risk management, workable policies and procedures, and regular testing.
At the end of the day, whether one is working on AI or any other emerging technology, there must be a continued focus on timeless best practices aimed at outcomes that serve the best interests of the marketplace.
To guide us, there are time-tested concepts that promote trust and accountability and that, upon reflection, tend to mirror what most laws, regulations, and best practices have long called for.
Happily, these concepts are likely highly familiar to responsible innovators.
We already know that we must abide by express or implied contracts and promises when we enter into a relationship with a customer.
We already know that we should be accountable under tort law if our use cases pose a risk of causing harm or loss.
We already know that we must assess the risks associated with our use cases and ensure that standards are clearly articulated.
We already know that we should scan the marketplace for lessons learned from the misfortunes of other players who have experienced failure or crossed the line.
And we already know, intuitively, how to do the right thing for our customers, stakeholders, and the public.
We can also take valuable lessons from other jurisdictions, both foreign and domestic, whose approaches help identify emerging perspectives on what constitutes prudent practice. For example, the EU AI Act takes a risk-based approach to use cases, categorizing them by levels of risk: unacceptable, high, limited, and minimal. Additionally, several U.S. states are considering bills that, in most instances, provide a framework for identifying potential regulatory concerns. Such signals can help guide innovators toward practices that are likely to align with regulatory expectations.
Responsible AI firms understand the criticality of adhering to principles that lead to good outcomes. They are familiar with policies and protocols for recognizing issues related to data quality, the need for ongoing model testing, sensitivity to drift and bias, and the prudent reliance on humans in the loop.
To be sure, AI innovators must remain vigilant to the signals sent by our regulatory friends. Perhaps more importantly, though, the hallmarks of excellence in AI adoption will be grounded in the commitment of responsible players: those well-versed in longstanding legal and commercial concepts and accountable to principles of business ethics, risk management, sound policies, and continuous testing.
Charles V. Senatore is a PCCE Senior Fellow, a current board member, a former SEC regional director, and a former federal prosecutor.
The views, opinions, and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness, or validity of any statements made on this site and will not be liable for any errors, omissions, or representations. The copyright of this content belongs to the author(s), and any liability with regard to infringement of intellectual property rights remains with the author(s).