Consumer Reports calls out slapdash AI voice-cloning safeguards

Four out of six companies offering AI voice cloning software fail to provide meaningful safeguards against the misuse of their products, according to research conducted by Consumer Reports.

The nonprofit publication evaluated the AI voice cloning services from six companies: Descript, ElevenLabs, Lovo, PlayHT, Resemble AI, and Speechify. It found ElevenLabs, Speechify, PlayHT, and Lovo “required only that researchers check a box confirming that they had the legal right to clone the voice or make a similar self-attestation.”

To establish an account, Speechify, Lovo, PlayHT, and Descript only required users to provide a name and email address.

“I actually think there’s a good argument that can be made that what some of these companies are offering runs afoul of existing consumer protection laws,” said Grace Gedye, a policy analyst at Consumer Reports, citing Section 5 of the FTC Act and various state laws.

Gedye, who authored the AI voice cloning report [PDF], acknowledged that open source voice cloning software complicates matters, but said it’s still worthwhile to push American companies to do a better job of protecting consumers.

Descript, ElevenLabs, Speechify, PlayHT, and Lovo did not immediately respond to requests for comment.

Several of these firms defended their business practices in response to questions posed by Consumer Reports in November 2024.

Speech synthesis has been the focus of research for decades, but only recently, thanks to advances in machine learning, has voice cloning become convincing, easy to use, and widely accessible.

The software has a variety of legitimate uses, such as generating audiobook narration, enabling speech for people unable to speak, and handling customer support, to the extent customers tolerate it. But it can also be easily misused.

Lyrebird was the canary in the coal mine. In 2017, the Canada-based startup (since acquired by Descript) released audio clips featuring the voices of Donald Trump, Barack Obama, and Hillary Clinton, saying things they hadn’t actually said.

It was a proof of concept for what’s now a real problem – reproducing other people’s voices for deceptive purposes, or audio deepfakes.

According to the US Federal Trade Commission’s 2023 Consumer Sentinel Network report [PDF], more than 850,000 impostor scams were reported that year, about a fifth of which resulted in monetary losses, totaling $2.7 billion. While an unknown but presumably small portion of these involved AI voice cloning software, reports of misuse of the technology have become more common.

Last year, for example, police in Baltimore, Maryland, arrested a high school’s former athletic director for allegedly using voice cloning software to impersonate the school’s principal, making it sound as if the principal had made racist and antisemitic remarks.

And the voice cloning report cites testimonials from hundreds of consumers who, in response to a February 2024 solicitation, told the publication about their experiences with impersonation phone calls.

The concern raised in Gedye’s report is that some of these companies specifically market their software for deception. “PlayHT, a voice cloning company, lists ‘pranks’ as a use case for its AI voice tools in a company blog post,” the report says. “Speechify, another AI voice company, also suggests prank phone calls as a use case for its tools. ‘There’s no better way to prank your friends than by pretending you’re someone else.’”

Concern about misuse is evident among some large commercial AI vendors, too. Microsoft, for example, has chosen not to publicly release its VALL-E 2 project, citing potential misuses “such as spoofing voice identification or impersonating a specific speaker.” Similarly, OpenAI has limited access to its Voice Engine speech synthesis tool.

The US Federal Trade Commission last year finalized a rule prohibiting AI impersonation of governments and businesses. It subsequently proposed extending that ban to cover the impersonation of individuals, but no further progress appears to have been made toward that end.

Given the current US administration’s efforts to eliminate regulatory bodies like the Consumer Financial Protection Bureau, Gedye said state-level regulation might be more likely than further federal intervention.

“We’ve seen a lot of interest from states on working on the issue of AI specifically,” said Gedye. “Most of what I do is work on state-level AI policy and there’s a bunch of ambitious legislators who want to work on this issue. I think [state Attorneys General] are also interested in protecting their constituents from the harms of emerging technology. Maybe you have challenges at the federal level right now for consumer protection, although I hope that scams and impersonation are particularly nonpartisan issues.” ®


