FTC Articulates Consumer Privacy Concerns – Potential Misuse of Biometric Information and Technologies

by Apurva Dharia, Nancy Libin, John D. Seiver, and Kate Berry

Policy statement addresses possible bias and discrimination in the collection, use, and marketing of biometrics under the FTC Act.

On May 18, 2023, the Federal Trade Commission (FTC) issued a policy statement warning that the proliferation of technologies that use or claim to use biometric information may create risks to consumer privacy and data security and present a potential for bias and discrimination. The agency vowed to use its authority under Section 5 of the FTC Act to investigate unfair or deceptive acts involving the collection and use of biometric information and the marketing and use of biometric information technologies that mislead or cause harm to consumers and businesses.

In its policy statement, the FTC defined biometric information technologies to include all technologies that use or purport to use biometric information for any purpose. Specifically, the FTC stated that:

Biometric information includes, but is not limited to, depictions, images, descriptions, or recordings of an individual’s facial features, iris or retina, finger or handprints, voice, genetics, or characteristic movements or gestures (e.g., gait or typing pattern). Biometric information also includes data derived from such depictions, images, descriptions, or recordings, to the extent that it would be reasonably possible to identify the person from whose information the data had been derived.

The FTC has taken a position in line with Washington’s recent “My Health My Data Act” and the California Consumer Privacy Act (CCPA) by adopting a non-exhaustive list of biometric identifiers and defining “biometric information technologies” to refer to the broader category of all technologies that use or purport to use biometric information for any purpose, a definition that will enable the FTC to cast a wide net for enforcement. The FTC cites the rapid advancement of biometric technology over the past decade, and growing public scrutiny of the risks biometric technologies may pose to consumers, businesses, and society, as the backdrop for its increased scrutiny, under Section 5 of the FTC Act, of companies that collect and use biometric information or that market or use biometric information technologies.

Concerns and Risks Related to Biometric Information Technologies

The FTC cites the potential for fraud, privacy and security risks, bias, and discrimination as examples of the concerns it is evaluating:

  • Counterfeit Videos and Voice Recordings (“Deepfakes”) that would allow bad actors to convincingly impersonate individuals in order to commit fraud or to defame or harass the individuals depicted.
  • Security of large databases of biometric information that may be attractive targets for malicious actors because of the information’s potential to be used for other illicit purposes, including to achieve further unauthorized access to devices, facilities or data.
  • Use of biometric information technologies to identify consumers in certain locations, which could reveal sensitive personal information about them—for example, that they have accessed particular types of healthcare, attended religious services, or attended political or union meetings.
  • Use of error-prone biometric technologies to determine whether consumers can receive important benefits and opportunities or are subject to penalties or less desirable outcomes; published studies have found that many facial recognition algorithms produce significantly different error rates across racial groups, ages, and genders.

Regulating Unfair and Deceptive Acts Under Section 5 of the FTC Act

Unfairness

The FTC states that collecting, retaining, or using consumers’ personal information in ways that cause or are likely to cause substantial injury, or disseminating technology that enables others to do so without taking reasonable measures to prevent harm to consumers, can be an unfair practice in violation of Section 5 of the FTC Act. It has previously charged businesses with failing to clearly and conspicuously disclose to consumers when their biometric data is collected and used, or that access to essential goods and services is conditioned on a consumer providing that information. The FTC cites complaints naming businesses that have engaged in unfair practices by:

  • Failing to protect consumers’ personal information using reasonable data security practices;
  • Engaging in invasive surveillance, tracking, or collection of sensitive personal information that was concealed from consumers or contrary to their expectations. The FTC cites its past complaints against Lenovo, Inc.[1] and Vizio, Inc.[2] as examples of failures to provide adequate notice or obtain consumers’ informed consent before collecting, or allowing third parties to collect, consumers’ personal information;
  • Implementing privacy-invasive default settings;[3]
  • Disseminating an inaccurate technology that, if relied on by consumers, could endanger them or others;[4] and
  • Offering for sale technologies with the potential to cause or facilitate harmful and illegal conduct, such as covert tracking, and failing to take reasonable measures to prevent such conduct.[5]

To prevent unfairness, the FTC warns that businesses should implement reasonable privacy and data security measures to ensure that any biometric information that they collect or maintain is protected from unauthorized access—whether that access stems from an external cybersecurity intrusion or an internal incursion by unauthorized employees, contractors, or service providers.

Deception

The FTC also identifies two main categories of deception that it will scrutinize:

  • False or unsubstantiated marketing claims relating to the validity, reliability, accuracy, performance, fairness, or efficacy of technologies using biometric information; and
  • Deceptive statements about the collection and use of biometric information.

Examples of false or unsubstantiated marketing claims include the FTC’s action against Aura Labs, Inc., in which the FTC alleged that the company’s representations that its mobile application measured blood pressure as accurately as a traditional blood pressure cuff were false, misleading, or unsubstantiated. In addition, representations about biometric information technologies must have a reasonable basis, and claims of validity or accuracy are deceptive if they hold true only for certain populations and those limitations are not clearly stated.[6]

Finally, the FTC states that false or misleading statements about the collection and use of biometric information constitute deceptive acts in violation of Section 5 of the FTC Act, as does failing to disclose any material information needed to make a representation non-misleading. It cites its complaint against Everalbum, Inc., which alleged that the company misrepresented that it was not using face recognition unless the user enabled it, as an example of action taken against a business charged with deceptive practices related to the collection and use of biometric information. The complaint resulted in an FTC order requiring the deletion of data, models, and algorithms developed using data for which the company had not obtained express consent.

To avoid engaging in deceptive acts, the FTC warns that businesses should:

  • Not make false statements about the extent to which they collect or use biometric information, or about whether or how they implement technologies using biometric information; and
  • Ensure that they are not telling half-truths, such as making an affirmative statement about some purposes for which they will use biometric information while failing to disclose other material uses of the information.

Best Practices for Using Biometric Information Technologies

The FTC states that it will conduct a holistic assessment of a business’s relevant practices in determining whether the business’s use of biometric information or biometric information technology violates Section 5 of the FTC Act. At a minimum, the FTC will take the following factors into account:

  1. Failing to assess foreseeable harms to consumers before collecting biometric information. Businesses should conduct impact assessments of the potential risks to consumers associated with the collection, storage, and/or use of biometric information technologies.
  2. Failing to promptly address known or foreseeable risks. Businesses should proactively identify and implement readily available tools for reducing and eliminating risks. Examples include:
    1. Taking steps to reduce biases or to eliminate potential for discrimination caused by known biases;
    2. Limiting access to biometric information;
    3. Protecting consumers’ personal information from unauthorized access using reasonable data security practices;
    4. Timely updating algorithms and hardware components of systems that are used to process biometric information; and
    5. Abandoning technology known to have high error rates or biases, even if more convenient or efficient.
  3. Engaging in surreptitious and unexpected collection or use of biometric information. Businesses should ensure they provide consumers with adequate notice regarding the collection and use of biometric information. Businesses should also consider implementing a mechanism for accepting and addressing consumer complaints and disputes related to businesses’ use of biometric information technologies.
  4. Failing to evaluate the practices and capabilities of employees and third parties that have access to biometric information. The FTC reminds businesses to seek relevant assurances and contractual agreements that require third parties to take appropriate steps to minimize risks to consumers and to audit those third parties for compliance. Businesses should provide appropriate training for employees and contractors that process biometric information.
  5. Failing to conduct ongoing monitoring of technologies that the business develops, offers for sale, or uses in connection with biometric information. Businesses must ensure that these technologies are functioning as anticipated and that their use does not cause consumer injury, and must cease any practices that do.

Footnotes

[1] The FTC alleged that Lenovo’s preinstallation, without adequate notice or informed consent, of ad-injecting software that acted as a “man-in-the-middle” between consumers and all websites with which they communicated was unfair, and that Lenovo’s failure to take reasonable measures to assess and address security risks created by the preinstalled software was also unfair.

[2] The FTC alleged that Vizio’s collection of sensitive television viewing activity without consent and contrary to consumer expectations, and sharing of such information with third parties, were unfair practices.

[3] See FTC v. Frostwire LLC, Case No. 1:11-cv-23643 (S.D. Fla. Oct. 11, 2011) (Complaint alleged that distributing an application with default settings that caused or were likely to cause consumers to unwittingly publicly share files already present on, or subsequently saved on, the consumers’ mobile devices, including, among others, consumers’ pictures, videos, and documents, was an unfair practice).

[4] See FTC v. Breathometer, Inc., No. 3:17-cv-314 (N.D. Cal. Jan. 23, 2017) (Complaint alleged that failing to notify consumers or to take corrective action upon learning that device measuring blood alcohol levels was inaccurate was an unfair practice).

[5] See In re DesignerWare, LLC, FTC File No. 1123151 (Apr. 11, 2013) (Complaint alleged that furnishing rent-to-own stores with monitoring and tracking software to be installed on rented computers was an unfair practice).

[6] See In re Everalbum, FTC File No. 1923172 (May 6, 2021) (Complaint alleged that company’s representations that it was not using facial recognition unless user enabled it were deceptive, where the representations were true only for users in Texas, Illinois, Washington, and the European Union, and users outside of those locations were not provided a setting to turn off facial recognition). 

Nancy Libin is a Partner, John D. Seiver is Of Counsel, and Kate Berry and Apurva Dharia are Associates at Davis Wright Tremaine LLP. This post originally appeared on the firm’s blog.

The views, opinions, and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness, and validity of any statements made on this site and will not be liable for any errors, omissions, or representations. The copyright of this content belongs to the author(s), and any liability with regard to infringement of intellectual property rights remains with the author(s).