Technological innovations such as generative artificial intelligence (AI) have come under increasing scrutiny from regulators in the U.S., the European Union, and beyond. This heightened oversight aims to ensure that companies implement strong privacy, safety, and design safeguards to protect users and to secure the data used in training advanced AI models. Some of these regulations have already taken effect or will do so soon. The European Union’s AI Act is expected to take effect in the second half of 2024, requiring firms to comply with obligations based on the risk level of their AI systems, including transparency, data governance, human oversight, and risk management requirements for high-risk AI applications. Within the U.S., several states have enacted laws requiring app providers to verify users’ ages and regulating AI to protect users, especially children. At the federal level, proposed legislation like the Kids Online Safety Act (KOSA) and the American Data Privacy Protection Act (ADPPA) seeks to establish national standards for youth safety, data privacy, age verification, and AI transparency on digital platforms.
For many firms, these regulatory shifts have necessitated a complete reevaluation of their compliance strategies. Meta offers a fresh example of how businesses are navigating this evolving landscape. At its “Global Innovation and Policy” event on October 16 and 17, which gathered academics, technology leaders, and policy experts, Meta executives outlined the company’s expanded compliance strategy. That strategy now extends beyond privacy concerns to tackle broader regulatory challenges, such as AI governance, youth protection, and content moderation.
Following the 2019 FTC consent decree that required Meta to overhaul its privacy practices due to allegations of mishandling user data, the company focused on strengthening its privacy design and compliance efforts. Since then, emerging laws in AI regulation, data security, and content moderation have introduced new challenges, often intersecting with existing privacy requirements. Recognizing this overlap, Meta adopted a more unified strategy, as regulations like the Digital Services Act (DSA) and the EU AI Act increasingly link data privacy with AI governance, pushing companies toward an integrated compliance approach. Privacy compliance alone no longer makes the cut.
At present, Meta has broadened its accountability programs beyond privacy, applying lessons learned from its privacy framework to compliance in areas like youth regulation and intellectual property. It has done so in two main ways. The first is expanding its “regulatory readiness process,” which is designed to increase compliance and mitigate risk. Originally focused on privacy, this process now anticipates new laws, legal initiatives, and enforcement actions in other areas as well, assessing risks related to content moderation and intellectual property, among others, to ensure a more thorough compliance approach. The second is expanding its “risk review process,” which has evolved from a privacy risk assessment into a multi-risk assessment conducted at every stage of product and technology development.
Meta’s strategic shift highlights the growing need for global companies to navigate an increasingly complex and fragmented regulatory landscape. To address this, Meta has invested significantly in automating compliance processes, dedicating hundreds of engineers to developing sophisticated systems that streamline regulatory adherence across privacy and broader compliance requirements. By automating these processes, global firms can more effectively manage regulatory complexity at scale, enabling them to focus on innovation in fields like generative AI and targeted advertising.
However, the high costs associated with developing and maintaining these advanced compliance systems present a substantial hurdle for smaller companies, which often lack the resources to implement such solutions. This creates a growing divide within the industry, where larger corporations with the financial means to automate compliance can adapt more easily to complex regulatory environments, while smaller and emerging firms face increasing difficulties in keeping up.
Meta’s case underscores how regulatory compliance increasingly shapes business operations and product development from the outset. Integrating compliance into the product development lifecycle is becoming essential not only to ensure product safety but also to support innovation, particularly in emerging fields like AI, where regulatory frameworks and product strategies are increasingly intertwined. Such substantial investment in compliance infrastructure reflects a broader industry trend: regulatory adherence is no longer a peripheral concern but is evolving into a core business function, shaping both the direction and design of new products. This shift suggests that for companies focused on technological innovation involving data and AI, the ability to innovate effectively will depend on seamlessly embedding compliance into the foundation of their product development processes.
The growing regulatory demands in areas like AI governance, privacy, and content moderation are reshaping the competitive landscape, with larger firms like Meta well-positioned to adapt due to their ability to invest heavily in compliance infrastructure and automation. While these investments enable companies to meet evolving legal requirements and continue innovating, they create a widening gap between industry giants and smaller firms. For startups and smaller businesses, the escalating costs of compliance may prove prohibitive, limiting their ability to compete. As a result, the current regulatory landscape risks stifling competition, consolidating power among a few dominant players, and potentially slowing down innovation from smaller, more agile companies that may struggle to keep pace with the rising compliance standards.
Florencia Marotta-Wurgler is the Boxer Family Professor of Law at New York University School of Law.
The views, opinions, and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness, and validity of any statements made on this site and will not be liable for any errors, omissions, or representations. The copyright of this content belongs to the author(s), and any liability with regard to infringement of intellectual property rights remains with the author(s).