Category Archives: Biometric Data

From Washington to Brussels: A Comparative Look at the Biden Administration’s Executive Order and the EU’s AI Act

by Marianna Drake, Marty Hansen, Lisa Peets, Will Capstick, Jayne Ponder, and Yaron Dori

Photos of the authors.

Top left to right: Marianna Drake, Marty Hansen, and Lisa Peets. Bottom left to right: Will Capstick, Jayne Ponder, and Yaron Dori. (Photos courtesy of Covington & Burling LLP)

On October 30, 2023, days ahead of government leaders convening in the UK for an international AI Safety Summit, the White House issued an Executive Order (“EO”) outlining an expansive strategy to support the development and deployment of safe and secure AI technologies (for further details on the EO, see our blog here). As readers will be aware, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the EU “AI Act”) in 2021 (see our blog here). EU lawmakers are currently negotiating changes to the Commission text, with hopes of finalizing the text by the end of this year, although many of its obligations would only begin to apply to regulated entities in 2026 or later.

The EO and the AI Act stand as two important developments shaping the future of global AI governance and regulation. This blog post discusses key similarities and differences between the two.

Continue reading

Consumers Are Voicing Concerns About AI

by Simon Fondrie-Teitler and Amritha Jayanti

Federal Trade Commission

This blog is part of a series authored by the FTC’s Office of Technology focused on emerging technologies and consumer and market risks, with a look across the layers of technology—from data and infrastructure to applications and design of digital systems.

Over the last several years, artificial intelligence (AI)—a term which can refer to a broad variety of technologies, as a previous FTC blog notes—has attracted an enormous amount of market and media attention. That’s in part because the potential of AI is exciting: there are opportunities for public progress by enhancing human capacity to integrate, analyze, and leverage information. But it’s also, perhaps in larger part, because the introduction of AI presents new layers of uncertainty and risk. The technology is altering the market landscape, with companies moving to provide and leverage essential inputs of AI systems, such as data and hardware – opening a window of opportunity for companies to potentially seize outsized power in this technology domain. AI is also fundamentally shifting the way we operate; it’s lurking behind the scenes (or, in some cases, operating right in our faces) and changing the mechanics by which we go about our daily lives. That can be unsettling, especially when the harms brought about by that change are tangible and felt by everyday consumers.

Continue reading

Protecting the Privacy of Health Information: A Baker’s Dozen of Takeaways from FTC Cases

by Elisa Jillson

Photo of the author

Photo courtesy of the author

In the past few months, the FTC has announced case after case involving consumers’ sensitive health data, alleging violations of both Section 5 of the FTC Act and the FTC’s Health Breach Notification Rule. The privacy of health information is top of mind for consumers – and so it’s top of mind for the FTC. Companies collecting or using health data, listen up. There are a number of key messages from BetterHelp, GoodRx, Premom, Vitagene, and other FTC matters that you need to hear.

Continue reading

U.S. District Court Holds That BIPA’s Liquidated Damages Are Discretionary

by David Rice, John Seiver, and Sarah Wood

Photos of the authors

Left to right: David Rice, John Seiver, and Sarah Wood (photos courtesy of the authors)

Decision may provide relief from the specter of “ruinous” damage verdicts stemming from the Illinois Biometric Information Privacy Act.

The U.S. District Court for the Northern District of Illinois, Eastern Division, issued an order on June 30, 2023, that may substantially alter the risk exposure for entities sued for violations of the Illinois Biometric Information Privacy Act (“BIPA”), currently the most stringent of the state biometric privacy laws and the only one with a private right of action.[1] The Court held that statutory damages under BIPA are discretionary rather than fixed in amount for each violation. In doing so, the Court vacated a damages award of $228 million and set the case for a new jury trial limited to the issue of damages. While this ruling provides some potential relief for BIPA defendants, we also want to highlight that BIPA is not the only biometric privacy law despite the attention given to it, and companies need to be mindful of other state laws focused on collecting, processing, or disclosing biometric data.

Continue reading

Oregon Consumer Privacy Act Signed Into Law

by Nancy Libin, Michael T. Borgia, John D. Seiver, David L. Rice, and Patrick J. Austin

Photos of the authors

Left to right: Nancy Libin, Michael T. Borgia, John D. Seiver, David L. Rice, and Patrick J. Austin (photos courtesy of Davis Wright Tremaine LLP)

Oregon becomes the 12th state with a comprehensive consumer data privacy law

The Oregon Consumer Privacy Act (OCPA) became law on July 18, 2023. Oregon is the twelfth state to enact a comprehensive consumer data privacy law, joining California, Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Montana, Florida, and Texas. The OCPA goes into effect July 1, 2024 (the same date as the recently enacted privacy laws in Texas and Florida). The effective date for non-profits—which, unlike under most other state privacy laws, are not exempt under the OCPA—is delayed until July 1, 2025.

Continue reading

The Top Eight AI Adoption Failures and How to Avoid Them

by Avi Gesser, Matt Kelly, Samuel J. Allaman, Michelle H. Bao, Anna R. Gressel, Michael Pizzi, Lex Gaillard, and Cameron Sharp

Photos of the authors

Top left to right: Avi Gesser, Matt Kelly, Samuel J. Allaman, and Michelle H. Bao.
Bottom left to right: Anna R. Gressel, Michael Pizzi, Lex Gaillard, and Cameron Sharp.
(Photos courtesy of Debevoise & Plimpton LLP)

Over the past three years, we have observed many companies in a wide range of sectors adopt Artificial Intelligence (“AI”) applications for a host of promising use cases. In some instances, however, those efforts have ended up being less valuable than anticipated—and in a few cases, were abandoned altogether—because certain risks associated with adopting AI were not properly considered or addressed before or during implementation. These risks include issues related to cybersecurity, privacy, contracting, intellectual property, data quality, business continuity, disclosure, and fairness.

In this Debevoise Data Blog post, we examine how the manifestation of these risks can lead to AI adoption “failure” and identify ways companies can mitigate these risks to achieve their goals when implementing AI applications.

Continue reading

Not Home Alone: FTC Says Ring’s Lax Practices Led to Disturbing Violations of Users’ Privacy and Security

by Lesley Fair

Photo of the author

Lesley Fair (photo courtesy of author)

Many consumers who use video doorbells and security cameras want to detect intruders invading the privacy of their homes. Consumers who installed Ring may be surprised to learn that, according to a proposed FTC settlement, one “intruder” invading their privacy was Ring itself. The FTC says Ring gave its employees and hundreds of Ukraine-based third-party contractors up-close-and-personal video access into customers’ bedrooms, their kids’ bedrooms, and other highly personal spaces – including the ability to download, view, and share those videos at will. And that’s not all Ring was up to. In addition to a $5.8 million financial settlement, the proposed order in the case contains provisions at the intersection of artificial intelligence, biometric data, and personal privacy. It’s an instructive bookend to another major biometric privacy case the FTC announced today, Amazon Alexa.

Continue reading

FTC Articulates Consumer Privacy Concerns – Potential Misuse of Biometric Information and Technologies

by Apurva Dharia, Nancy Libin, John D. Seiver, and Kate Berry

Photos of the authors

From left to right: Apurva Dharia, Nancy Libin, John D. Seiver, and Kate Berry
(Photos courtesy of Davis Wright Tremaine LLP and authors)

Policy statement addresses possible bias and discrimination in the collection, use, and marketing of biometrics under the FTC Act.

On May 18, 2023, the Federal Trade Commission (FTC) issued a policy statement warning that the proliferation of technologies that use or claim to use biometric information may pose risks to consumer privacy and data security and present a potential for bias and discrimination. The agency vowed to use its authority under Section 5 of the FTC Act to investigate unfair or deceptive acts in the collection, use, and marketing of biometric information technologies that mislead or cause harm to businesses and consumers.

Continue reading