AI Voice Simulator Easily Abused to Deepfake Celebrities


AI speech synthesis is becoming more sophisticated, which also means it’s more vulnerable to abuse.
Graphic: ArtemisDiana (Shutterstock)

Who could have seen this coming? AI image generators have already been used to create non-consensual pornography of celebrities, so why wouldn't the same user base abuse a free AI text-to-speech deepfake voice generator?

UK-based ElevenLabs first advertised its Prime Voice AI earlier this month, but just a few weeks later it may have to reconsider its entire model after reports that users were creating hateful messages with real people's voices. The company released its first open text-to-voice beta system on January 23, promising voices that would match the style and cadence of a real human. Its "Voice Lab" feature lets users clone voices from small audio samples.

Motherboard first reported Monday that a number of 4chan users were uploading deepfaked voices of celebrities and internet personalities, from Joe Rogan to Robin Williams. One 4chan user reportedly posted a clip of Emma Watson reading a passage from Mein Kampf. Another used a voice that reportedly sounded like Rick Sanchez from Justin Roiland's Rick & Morty talking about how he would hit his wife, an obvious reference to the recent domestic violence allegations against the series' co-creator.

In a 4chan thread reviewed by Gizmodo, users posted clips of the AI spewing intense misogyny and transphobia in the voices of characters and narrators from various anime and video games. In short, it's exactly what you'd expect from the armpit of the internet once it gets its hands on easy-to-use deepfake technology.

On Monday, the company tweeted that it had a "crazy weekend," noting that while its technology was "overwhelmingly used for positive causes," developers were seeing an "increasing number of voice cloning abuses." The company did not name the specific platform or platforms on which the abuse took place.

ElevenLabs floated some mitigation ideas, including introducing account verification, which could require an upfront payment, or even discontinuing the free version of Voice Lab altogether, which would mean every cloning request would have to be manually verified.

Last week, ElevenLabs announced it had received $2 million in pre-seed funding led by Czech Republic-based Credo Ventures. The little AI company that could planned to expand its operations and bring the system to other languages, according to its pitch deck. This admitted abuse is a twist for developers who had been very optimistic about where the technology could go. The company spent the past week promoting its technology, claiming the system could reproduce Polish TV personalities and even floating the idea of putting human audiobook narrators out of work. The beta site touts how the system could automate audio for news articles or even create audio for video games.

The system certainly mimics voices fairly well, based on the clips reviewed by Gizmodo. A layperson might not be able to tell the difference between a fake clip of Rogan talking about his porn habits and a genuine clip from his podcast. Still, the audio has a robotic quality that becomes more apparent in longer clips.

Gizmodo reached out to ElevenLabs via Twitter for comment, but we didn’t immediately receive a response. We’ll update this story when we hear more.

Plenty of other companies offer their own text-to-voice tools, but while Microsoft's similar VALL-E system has yet to be released, other, smaller companies have been far less reluctant to open the door to abuse. AI experts have told Gizmodo that this urge to get products out the door without an ethics review will continue to cause problems like this.

Source: https://gizmodo.com/ai-joe-rogan-4chan-deepfake-elevenlabs-1850050482

Zack Zwiezen

