Vishing & AI: The Phone Threat We Can’t Hear Coming

Imagine receiving a call from a friend or family member: could you tell their voice apart from an AI imposter? You might easily spot a scammy WhatsApp message, but what if a loved one called, urgently asking for financial help in their own familiar voice? AI voice generator software has made replicating human voices remarkably easy, and as that technology intersects with vishing (voice phishing), we face a daunting question: how do we defend against a threat we can't hear coming?

Key Takeaways

  • AI Voice Cloning: AI can replicate voices using just 15 seconds of audio from social media or other sources.
  • Misinformation: Fake audio can spread misinformation and fuel fraudulent ads featuring celebrities.
  • Emotional Manipulation: Scammers use AI to mimic voices of loved ones, asking for money.
  • Pressure Tactics: Most scams involve creating a sense of urgency to prompt irrational decisions.

AI Voice Fraud: A Growing Threat

Online scams have evolved far beyond phishing emails and robocalls. AI voice cloning now lets scammers mimic the people you trust most, making scams much harder to detect. Voices can be cloned from social media posts, phone messages, or leaked audio, producing highly believable impersonations. OpenAI’s Voice Engine, for instance, needs only a 15-second audio sample to generate natural-sounding speech.

Recent high-profile incidents highlight the severity of the threat. In April 2024, BBC presenter Liz Bonnin’s voice was cloned without her consent to front misleading advertisements. Scarlett Johansson accused OpenAI of imitating her voice without permission. Deepfake audio of London Mayor Sadiq Khan making inflammatory remarks spread online and nearly caused serious disorder. And a political consultant in Louisiana was indicted over a fake robocall that impersonated President Biden in an attempt to influence voters.

These incidents show that it is not only celebrities and public figures who are at risk: anyone can be a target of AI voice fraud.

The Real-Life Impact of AI Voice Scams

The movie "Thelma" raised awareness of AI voice fraud through a storyline where a 93-year-old grandmother loses $10,000 to a scam call. This reflects real incidents where scammers use voice cloning technology to impersonate loved ones, especially targeting older people.

It’s easy to assume you’re safe because you avoid calls from unknown numbers. But when scammers already hold detailed information about you and your family, a message saying, “Mom, it’s me. Please pick up,” is hard to ignore. Even knowledgeable people can be caught off guard: Charlotte Cowles, the financial-advice columnist for New York magazine’s The Cut, handed $50,000 to a scammer despite her expertise.

How to Protect Yourself from AI Voice Cloning Scams

Treat any call from an unknown number claiming to be a loved one as a red flag, and keep in mind that caller ID can be spoofed, so even a familiar number is not proof of identity. Verify the caller’s story by contacting the person directly through another channel, such as a number you already have saved or their social media accounts.

Security experts suggest that families agree on a unique passcode or safe word to confirm each other’s identity. It may feel awkward to set up, but it gives you a simple, reliable check. Remember that most scams rely on creating a sense of urgency to push you into hasty decisions. Stay calm, take a deep breath, and assess the situation before responding.

The Bottom Line

We are entering an era of digital deception where the voices of our loved ones can be weaponized against us. To navigate this new reality, we must not only strengthen our financial security but also rethink how we communicate and trust each other online. While technology can be overwhelming, our best defense lies in our human qualities: skepticism, patience, and the ability to pause and reflect.