You’ve just got home after a long day at work and are about to sit down for dinner when suddenly your phone starts buzzing. On the other end is a loved one, perhaps a parent, child, or childhood friend who is asking you to send them money immediately.
You ask them questions and try to understand. Something is off about their answers, which are either vague or out of character, and sometimes there's an odd delay, almost like they're thinking a little too slowly. Yet you're sure it's definitely your loved one speaking: that's their voice you hear, and the caller ID shows their number. Chalking up the strangeness to their panic, you dutifully send the money to the bank account they give you.
The next day you call them back to make sure everything is OK. Your loved one has no idea what you are talking about. That's because they never called you – you were tricked by technology: an AI voice deepfake. Thousands of people were scammed this way in 2022.
The ability to clone a person's voice is increasingly within reach of anyone with a computer.
As computer security researchers, we see that ongoing advances in deep learning algorithms, audio editing and engineering, and synthetic voice generation have made it increasingly possible to convincingly simulate a person's voice.
Worse, chatbots like ChatGPT are starting to generate realistic scripts with real-time adaptive responses. Combined with voice generation, these technologies transform a deepfake from a static recording into a living, lifelike avatar capable of holding a convincing phone conversation.
Cloning a voice with AI
Creating a convincing high-quality deepfake, whether video or audio, is not easy. It requires a wealth of artistic and technical skill, powerful hardware, and a fairly large sample of the target voice.
A growing number of services offer to produce medium- to high-quality voice clones for a fee, and some voice deepfake tools need a sample of only a minute, or even just a few seconds, to produce a clone that could be convincing enough to fool someone. However, convincing a loved one – for example, as part of an impersonation scam – would likely require a much larger sample.
Researchers have managed to clone voices with just five seconds of recording time.
Protecting against deepfake scams and disinformation
Researchers on the DeFake project at the Rochester Institute of Technology, the University of Mississippi and Michigan State University, along with other researchers, are working hard to detect video and audio deepfakes and limit the damage they cause. There are also simple, everyday steps you can take to protect yourself.
For starters, voice phishing, or "vishing", scams like the one described above are the voice deepfakes you are most likely to encounter in everyday life, both at work and at home. In 2019, an energy company was scammed out of $243,000 when criminals simulated the voice of its parent company's boss to instruct an employee to transfer money to a supplier. In 2022, people were scammed out of an estimated $11 million by simulated voices, including those of close, personal connections.
What can you do about AI voice deepfakes?
Be wary of unexpected calls, even from people you know well. That doesn't mean you have to schedule every call, but it helps to at least email or text ahead of time. Also, don't rely on caller ID, since it can be faked too. For example, if you get a call from someone claiming to represent your bank, hang up and call the bank directly to confirm the call's legitimacy. Be sure to use a number you wrote down, saved in your contacts list, or found through a search.
Also, be careful with personal identifying information like your Social Security number, home address, date of birth, phone number, middle name, and even the names of your children and pets. Scammers can use this information to impersonate you to banks, real estate agents, and others, enriching themselves while bankrupting you or destroying your credit.
Here’s another piece of advice: know yourself. In particular, know your intellectual and emotional biases and vulnerabilities. This is good life advice in general, but it is key to protecting yourself from manipulation. Scammers usually try to figure out your financial fears, your political affiliations or other inclinations, whatever they may be, and then take advantage of them.
This vigilance is also a good defense against disinformation spread using voice deepfakes. Deepfakes can be used to take advantage of your confirmation bias, or what you are inclined to believe about someone.
If you hear someone important, whether in your community or in government, say something that either seems very out of character or confirms your worst suspicions, you should be cautious.
Want to learn more about AI, chatbots, and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to the best free AI art generators and everything we know about OpenAI's ChatGPT.
Matthew Wright, Professor of Computing Security, Rochester Institute of Technology, and Christopher Schwartz, Postdoctoral Research Associate of Computing Security, Rochester Institute of Technology
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Source: AI Deepfakes of Your Voice Will Call. Here's How to Avoid Them. https://gizmodo.com/ai-deepfake-voice-how-to-avoid-spam-phone-calls-1850245346