Category Archives: Biometric Data

CNIL Publishes New Guidelines on the Development of AI Systems

by David Dumont and Tiago Sérgio Cabral

David Dumont and Tiago Sérgio Cabral (photos courtesy of Hunton Andrews Kurth LLP)

On June 7, 2024, following a public consultation, the French Data Protection Authority (the “CNIL”) published the final version of its guidelines addressing the development of AI systems from a data protection perspective (the “Guidelines”). Read our blog on the pre-public consultation version of these Guidelines.

In the Guidelines, the CNIL states that, in its view, the successful development of AI systems can be reconciled with the challenges of protecting privacy.

Continue reading

FTC Finalizes Expansion of Health Breach Notification Rule’s Broad Applicability to Unauthorized App Disclosures

by Adam H. Greene and Apurva Dharia

Adam H. Greene and Apurva Dharia (photos courtesy of Davis Wright Tremaine LLP)

The FTC issued a final rule to lock in changes to the Health Breach Notification Rule (HBNR) that it proposed in May 2023. While the HBNR began as a breach notification rule seemingly focused on a narrow set of applications that store medical records on behalf of consumers, the final rule continues the FTC’s path toward turning the rule into a means of imposing privacy and breach notification restrictions on virtually all health and wellness apps. Consistent with the FTC’s September 2021 policy statement and recent enforcement actions, the final rule further revises the HBNR to apply to most health and wellness apps and to require breach notification in almost any instance in which a consumer’s identifiable health data is disclosed without their authorization (including unauthorized disclosures to advertising platforms).

The HBNR requires vendors of personal health records (PHRs) and PHR related entities to notify individuals, the FTC, and, in some cases, the media, of a breach of unsecured PHR identifiable health information.[1] It also requires third-party service providers to vendors of PHRs and PHR related entities to provide notification to such vendors and PHR related entities following the discovery of a breach. The rule applies to foreign and domestic non-HIPAA covered vendors of “personal health records that contain individually identifiable health information created or received by health care providers.” The HBNR specifies the timing, method, and content of notification, and in the case of certain breaches involving 500 or more people, requires notice to the media. The final rule will go into effect 60 days after its publication in the Federal Register.

Continue reading

Maryland Legislature Passes State Privacy Bill with Robust Requirements and Broad Threshold for Application

by Marshall Mattera and Amanda Pervine

Marshall J. Mattera (photo courtesy of Hunton Andrews Kurth)

The Maryland legislature recently passed the Maryland Online Data Privacy Act of 2024 (“MODPA”), which was delivered to Governor Wes Moore for signature and, if enacted, will impose robust requirements with respect to data minimization, the protection of sensitive data, and the processing and sale of minors’ data.

Continue reading

Prohibited AI Practices—A Deep Dive into Article 5 of the European Union’s AI Act

by Dr. Martin Braun, Anne Vallery, and Itsiq Benizri

From left to right: Dr. Martin Braun, Anne Vallery, and Itsiq Benizri (photos courtesy of Wilmer Cutler Pickering Hale and Dorr LLP)

Article 5 of the AI Act essentially prohibits AI practices that materially distort people’s behavior or that raise serious concerns in democratic societies.

As explained in our previous blog post, this is part of the overall risk-based approach taken by the AI Act, which means that different requirements apply in accordance with the level of risk. In total, there are four levels of risk: unacceptable, in which case AI systems are prohibited; high risk, in which case AI systems are subject to extensive requirements; limited risk, which triggers only transparency requirements; and minimal risk, which does not trigger any obligations.

Continue reading

Executive Order Prohibits Transfer of Sensitive Personal Data to “Countries of Concern”

by Patrick J. Austin and John Pilch

From left to right: Patrick J. Austin and John Pilch

On February 28, 2024, U.S. President Joe Biden issued an Executive Order on Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern (EO), which authorizes the U.S. Attorney General to restrict large-scale transfers of personal data to “countries of concern.” The “countries of concern” identified in the EO include China (along with Hong Kong and Macau), Russia, Iran, North Korea, Cuba, and Venezuela, according to a summary issued by the White House.

Continue reading

EU AI Act Will Be World’s First Comprehensive AI Law

by Beth Burgin Waller, Patrick J. Austin, and Ross Broudy

From left to right: Beth Burgin Waller, Patrick J. Austin, and Ross Broudy (photos courtesy of Woods Rogers Vandeventer Black PLC)

On March 13, 2024, the European Parliament formally approved the EU AI Act, making it the world’s first major set of regulatory ground rules governing artificial intelligence (AI), including generative AI technology. After passing final checks and receiving endorsement from the Council of the European Union, the EU AI Act is expected to become law in spring 2024, likely May or June.

The EU AI Act will be phased in. For example, regulations governing providers of generative AI systems are expected to go into effect one year after the regulation becomes law, while prohibitions on AI systems posing an “unacceptable risk” to the health, safety, or fundamental rights of the public will go into effect six months after the regulation enters into force. The complete set of regulations in the EU AI Act is expected to be in force by mid-2026.

Continue reading

President Biden Issues Executive Order Granting Authorities to Regulate the Transfer of Sensitive U.S. Data to Countries of National Security Concern

by Eric J. Kadel Jr., Sharon Cohen Levin, Nicole Friedlander, Anthony J. Lewis, Andrew J. DeFilippis, Joshua Spiegel, and George L. McMillan

Top row, left to right: Eric J. Kadel Jr., Sharon Cohen Levin, Nicole Friedlander, and Anthony J. Lewis.
Bottom row, left to right: Andrew J. DeFilippis, Joshua Spiegel, and George L. McMillan. (Photos courtesy of Sullivan & Cromwell LLP)

SUMMARY

On February 28, 2024, President Biden issued Executive Order 14117, “Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern” (the “Executive Order”), delegating new authorities to the U.S. Department of Justice (“DOJ”) and other agencies to regulate the transfer of sensitive U.S. data to countries of national security concern. The Executive Order focuses primarily on personal and other sensitive information, such as U.S. persons’ financial information, biometric data, personal health data, geolocation data, and information relating to government personnel and facilities.[1]

Continue reading

U.S. Cybersecurity and Data Privacy Outlook and Review – 2024

by Alexander H. Southwell and Snezhana Stadnik Tapia

From left to right: Alexander H. Southwell and Snezhana Stadnik Tapia (photos courtesy of Gibson, Dunn & Crutcher LLP)

As in previous years, the privacy and cybersecurity landscape continued to evolve substantially over the course of 2023. We recently reviewed some of the most significant U.S. developments in this area in the eleventh edition of Gibson Dunn’s U.S. Cybersecurity and Data Privacy Outlook and Review.

Below we summarize the past year’s developments and future prospects, including the wave of new privacy and cybersecurity legal and regulatory advances at the federal and state levels. This past year, states continued to take the lead in enacting privacy legislation, while branches of the federal government focused on data security, sensitive data, and artificial intelligence (“AI”). The surge of civil litigation over web-tracking technologies also continued. In 2024, we expect this heightened focus on privacy and cybersecurity, as well as on emerging technologies such as AI, to continue.

Continue reading

New Jersey Governor Signs Comprehensive Privacy Law

by Nancy Libin, David L. Rice, John D. Seiver, and Benjamin Robbins

From left to right: Nancy Libin, David L. Rice, John D. Seiver, and Benjamin Robbins (photos courtesy of Davis Wright Tremaine LLP)

On January 16, 2024, New Jersey Governor Phil Murphy signed into law Senate Bill 332 (“the Act”), making New Jersey the fourteenth state to enact a comprehensive consumer data privacy law, joining California, Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Montana, Florida, Texas, Oregon, and Delaware. The Act will take effect on January 16, 2025.

Continue reading

Coming Face to Face with Rite Aid’s Allegedly Unfair Use of Facial Recognition Technology

by Lesley Fair

(Photo courtesy of the author)

Rite Aid has “used facial recognition technology in its retail stores without taking reasonable steps to address the risks that its deployment of such technology was likely to result in harm to consumers as a result of false-positive facial recognition match alerts.” That’s the lawyerly language of the FTC’s just-filed action against drug store chain Rite Aid and a subsidiary. Put in more common parlance, the FTC alleges that Rite Aid launched an inadequately tested and operationally deficient covert surveillance program against its customers without considering the impact that its inaccurate facial recognition technology would have on people wrongly identified as “matching” someone on the company’s watchlist database. Among other things, a proposed settlement in the case would ban Rite Aid from using any facial recognition system for security or surveillance purposes for five years.

Continue reading