Legal Risks of Using AI Voice Analytics for Customer Service

by Avi Gesser, Johanna Skrzypczyk, Robert Maddox, Anna Gressel, Martha Hirst, and Kyle Kysela

There is a growing trend among customer-facing businesses towards using artificial intelligence (“AI”) to analyze voice data on customer calls. Companies are using these tools for various purposes, including identity verification, targeted marketing, fraud detection, cost savings, and improved customer service. For example, AI voice analytics can detect whether a customer is very upset and should therefore be promptly connected with an experienced customer service representative, or whether the person on the phone is not really the person they purport to be. These tools can also be used to assist customer service representatives in de-escalating calls with upset customers by making real-time suggestions of phrases to use that only the representative can hear, as well as to evaluate the employee’s performance in dealing with a difficult customer (e.g., did the employee raise her voice, did she manage to get the customer to stop raising his voice, etc.).
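For readers who want a concrete (if simplified) picture of the routing use case, the sketch below shows how a contact-center platform might act on a per-call analytics output. The field names, the 0.8 escalation threshold, and the routing labels are illustrative assumptions, not any particular vendor’s schema or a recommended business rule.

```python
from dataclasses import dataclass

@dataclass
class CallAnalysis:
    """Per-call output that a voice-analytics vendor might return.
    The field names are illustrative, not a real vendor schema."""
    call_id: str
    agitation_score: float   # 0.0 (calm) to 1.0 (very upset)
    verified_speaker: bool   # did the voice match the claimed identity?

# Illustrative business rule, not an industry or legal standard.
ESCALATION_THRESHOLD = 0.8

def route_call(analysis: CallAnalysis) -> str:
    """Decide how to handle the call based on the analytics output."""
    if not analysis.verified_speaker:
        return "flag_for_fraud_review"
    if analysis.agitation_score >= ESCALATION_THRESHOLD:
        return "escalate_to_experienced_representative"
    return "continue_with_current_representative"

# Example: a verified but very upset caller is escalated.
print(route_call(CallAnalysis("c-123", agitation_score=0.93, verified_speaker=True)))
```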

Some of the more novel and controversial uses for AI voice analytics in customer service include (1) detecting whether a customer is being dishonest, (2) inferring a customer’s race, gender, or ethnicity, and (3) assessing when certain kinds of customers with particular concerns purchase certain goods or services, and developing a corresponding targeted marketing strategy.  

Legal Considerations When Using AI Voice Analytics

As the market for voice analytics grows and the applications proliferate, so do litigation risks and regulatory scrutiny. For example, the Data Protection Authority in Hungary recently fined a bank close to €700,000 for using AI voice analytics in its call centers. The investigation cited a lack of consent or other legal basis for using AI voice analytics, inadequate security safeguards, and a failure to notify affected data subjects of their rights. The use of AI to analyze customer calls can implicate several different legal issues.

Transparency, Rights, and Consent Requirements under Privacy Laws: Privacy laws, such as the California Consumer Privacy Act (as amended by the California Privacy Rights Act that went into effect on January 1, 2023) (the “CCPA”), other state privacy laws coming online in 2023, and the (UK) GDPR, require companies to notify data subjects of the purposes for which they are using their data. Some uses of AI voice analytics may not be covered by existing privacy notices. Also, depending on the data collected and the use case, these privacy laws may require consent from customers, mandate that customers be provided with opt-out rights or the right to limit the use of that data, and/or limit the ability of the company to share personal data with third parties, which may include a vendor providing voice analytics services. Additionally, data subjects in these jurisdictions will likely have rights with respect to this data, such as the rights to access and deletion.

Biometric Laws: Some uses of voice analytics involve tagging the voices of certain customers so that they can be identified the next time they call, which can be helpful for marketing, authentication, and fraud detection. Because these use cases involve matching a voice recording to a particular person, they may create “voiceprints” and be subject to various biometric laws, such as Illinois’ Biometric Information Privacy Act (“BIPA”). These laws require that customers be notified that their voiceprints are being collected and that they provide express consent to such collection. BIPA also provides a private right of action for violations, with damages ranging from $1,000 to $5,000 per violation, which can result in very significant liabilities for companies. And, as noted above, some comprehensive privacy regimes require (explicit) consent for the processing of biometric identifiers. The UK Information Commissioner’s Office, for example, took enforcement action against the UK tax authority, Her Majesty’s Revenue and Customs, for failing to obtain valid consent when using voice authentication to verify callers.

Cybersecurity Laws: If customer voice data is being analyzed and stored, the company should assess the data security safeguards that are in place to protect that data and whether they meet applicable legal requirements. If the voice data is also being shared with third-party vendors, the company should conduct cybersecurity diligence on the vendor and consider what representations and contractual provisions are necessary to ensure legal compliance. Aside from the regulatory, reputational, and commercial risks associated with not securing customer voice data, the CCPA provides a private right of action for certain data breaches resulting from a failure to implement reasonable security measures. Individuals also have private rights of action under the GDPR and many other global privacy laws.

Employee Monitoring Statutes: To the extent that the voice analytics are being used for employee training purposes or to assess employee performance, they may also fall under one or more of the employee monitoring statutes, which require companies in certain jurisdictions to notify their employees when they are monitoring their activity.

Data Retention: Companies should assess whether the information being generated by their AI voice analytics is being saved and, if so, for how long. Several privacy and cybersecurity laws, including the New York SHIELD Act, the GDPR, and BIPA, specify that (sensitive) personal information should not be stored longer than is necessary to achieve the (legitimate business) purpose for which it was collected, unless there is some other legal or regulatory requirement to keep it. Some privacy laws, like the GDPR and the CCPA, also require companies to communicate the retention period to data subjects.
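As one hedged illustration of how a retention rule like this might be operationalized in practice, the sketch below removes stored voice-analytics records once they exceed a configured retention period unless they are subject to a hold. The 90-day figure and the record structure are assumptions made for the example, not a legal recommendation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative retention period; the appropriate period depends on the
# purpose of collection and on any legal or regulatory hold requirements.
RETENTION_PERIOD = timedelta(days=90)

@dataclass
class VoiceAnalyticsRecord:
    record_id: str
    collected_at: datetime
    under_legal_hold: bool = False  # e.g., subject to a litigation hold

def purge_expired(records: list[VoiceAnalyticsRecord],
                  now: datetime | None = None) -> list[VoiceAnalyticsRecord]:
    """Return only the records still within the retention period or that
    must be kept for another legal or regulatory reason."""
    now = now or datetime.now(timezone.utc)
    return [
        r for r in records
        if r.under_legal_hold or (now - r.collected_at) <= RETENTION_PERIOD
    ]
```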

Anti-discrimination Laws: To the extent that the use of AI voice analytics affects important decisions about customers or employees based on the tone of their voices (e.g., to assess how upset a customer is or how effective a customer service representative is at calming down an upset customer), companies should consider whether these tools have been tested to make sure that they produce similar results regardless of race, gender, age, and ethnicity. If the tools have not been trained on a wide variety of speech types, there is a risk that one or more protected classes will be treated worse than other groups based on their speech patterns or other cultural attributes. For this reason, UK regulators have highlighted the difficulties in ensuring that this kind of voice data is processed in a GDPR-compliant way, especially when the individual is providing the company with sensitive personal data subconsciously.
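To make the testing point concrete, the following is a minimal sketch of one possible disparity check: comparing a tool’s average scores across demographic groups on an evaluation set of calls that human reviewers judged to be comparable. The group labels, the mean-score metric, and the 0.1 gap threshold are all assumptions for illustration, not a recognized legal or technical standard.

```python
from collections import defaultdict

# Illustrative threshold for flagging a disparity; not a legal standard.
MAX_ALLOWED_GAP = 0.1

def mean_score_by_group(results: list[tuple[str, float]]) -> dict[str, float]:
    """results: (group_label, model_score) pairs from an evaluation set of
    calls that human reviewers judged to be comparable."""
    by_group: dict[str, list[float]] = defaultdict(list)
    for group, score in results:
        by_group[group].append(score)
    return {g: sum(scores) / len(scores) for g, scores in by_group.items()}

def flag_disparity(results: list[tuple[str, float]]) -> bool:
    """Return True if the gap between the highest- and lowest-scoring
    groups exceeds the illustrative threshold."""
    means = mean_score_by_group(results)
    return max(means.values()) - min(means.values()) > MAX_ALLOWED_GAP
```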

Automated Decision-Making (ADM) Laws: Companies should consider whether decisions are being made based solely on the voice analytic tools, such that they might violate ADM provisions of the GDPR, the Brazil Privacy Act, and other ADM laws discussed in our previous blog post on ADM laws. For example, if an employee were denied a promotion because the voice analytics determined that he was too loud or aggressive with customers, when, in fact, it was background noise or the quality of his equipment that accounted for his substandard evaluation, and no human had reviewed the relevant calls, that may be contrary to ADM laws.

Lie Detector Laws: The application of lie detector laws to certain kinds of voice analytics is currently the subject of litigation in California. These suits allege that software used to verify callers’ identities (which included asking customers to confirm their names) violates a prohibition in the California Invasion of Privacy Act (“CIPA”) on using voice analysis to determine the truth of statements without express written consent. Whether these claims will have any success remains to be seen, but it is worth noting that, unlike the CCPA, CIPA does not have a carve-out for data that is regulated by the Gramm-Leach-Bliley Act (“GLBA”).

Ways to Reduce Risk

In light of these regulatory and litigation risks, companies using AI voice analytics for customer calls should consider the following ways to reduce both legal and reputational risk:

  • Assessing Which Laws Apply: Many U.S. privacy laws, including BIPA and other U.S. state privacy laws, have broad carve-outs that shield financial institutions regulated by the GLBA, or data governed by the GLBA, from consumer privacy requirements. Identifying the jurisdictions in which voice analytics are used and determining which laws apply are important steps in reducing potential liability. Businesses should also keep in mind that many non-U.S. privacy laws have extraterritorial effect.
  • Updating Privacy Documents: Many of the legal and reputational risks associated with voice analytics can be reduced through enhanced notice. Companies should therefore review their privacy notices and policies and consider updating them to the extent that voice analytics uses are not fully disclosed to potentially affected customers or employees.
  • Obtaining Consent for High-Risk Uses: The level of consent needed to collect and use voice data for certain high-risk purposes is currently being tested by plaintiffs and regulators, and expanded by new legislation and legislative proposals in the U.S. If a company is currently using voice analytics for any of these high-risk purposes, it should consider the pros and cons of obtaining customer consent, until the relevant issues are resolved through court decisions, new laws, or clear regulatory guidance. Internationally, and in some states in the U.S., there may be no choice but to obtain consent.
  • Avoiding Creating Voiceprints: Many of the benefits of voice analytics (e.g., assessing whether a customer is very upset) can be obtained without creating a voiceprint or any other means of identifying an individual based on their voice. Creating and storing voiceprints triggers several additional regulatory obligations under various biometric and privacy laws, and should therefore be avoided if it is not necessary for the objective of the particular voice analytics project (see the sketch following this list).
  • Not Sharing Data with Third Parties: Several privacy and cybersecurity laws require additional notice, consent, and security requirements when companies share sensitive customer data, such as voiceprints, with third parties. Therefore, such sharing should be avoided if not necessary.
  • Not Using Data for Evaluations: Many voice analytics applications are used to train and coach customer service representatives. These applications take on significant additional regulatory compliance obligations and risks if they are also used in employee evaluations. Therefore, companies should consider whether such additional uses are worth the regulatory and reputational risk, especially without meaningful human oversight.
  • Assessing Risks Associated with Ethnic and Racial Differences: To the extent that AI voice analytics are used to determine who receives certain benefits (e.g., whether a customer should get a certain discount, or whether an employee should be promoted), careful consideration should be given to whether these tools have been tested to make sure that they do not treat people differently based on race, gender, age, ethnicity, or disability.
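As a minimal sketch of the voiceprint-avoidance point above: a system can emit only transient, call-level scores and deliberately discard the audio and any speaker-specific representation once the call ends, so that nothing capable of identifying an individual by voice is retained. The function names and fields below are hypothetical, and the scoring function is a stub standing in for whatever model a company actually uses.

```python
from dataclasses import dataclass

@dataclass
class CallLevelResult:
    """Only non-identifying, call-level outputs are retained."""
    call_id: str
    agitation_score: float

def _score_agitation(audio: bytes) -> float:
    # Stub returning a fixed value so the sketch runs end to end; a real
    # system would invoke a vendor or in-house emotion-analysis model here.
    return 0.0

def analyze_call(call_id: str, audio: bytes) -> CallLevelResult:
    """Compute a call-level score, then deliberately discard the raw audio
    and any speaker-specific representation so that no voiceprint is
    created or stored."""
    score = _score_agitation(audio)
    del audio  # nothing capable of identifying the speaker is retained
    return CallLevelResult(call_id=call_id, agitation_score=score)
```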

Avi Gesser is a partner, Johanna Skrzypczyk is counsel, Robert Maddox is international counsel, and Anna Gressel, Martha Hirst, and Kyle Kysela are associates at Debevoise & Plimpton LLP. This post originally appeared in the firm’s Data Blog.

The views, opinions and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness and validity of any statements made on this site and will not be liable for any errors, omissions or representations. The copyright of this content belongs to the author(s) and any liability with regards to infringement of intellectual property rights remains with the author(s).