Can Google’s AI Really Stop Scam Texts & Calls? Experts Aren’t So Sure
We’ve all heard the horror stories—or maybe even lived them. That sinking feeling when you realize the message or call you trusted was a scam. For every person who opens up about being tricked, there are countless others who stay silent, embarrassed, or still trying to figure out what went wrong.
According to a recent Bankrate survey, 1 in 3 American adults faced some type of financial scam or fraud in the past year alone, and nearly 2 in 5 of those actually lost money. Those aren't just statistics—that's real people losing savings, sleep, and peace of mind.
We've been fighting scammers for years with everything from spam filters to number blockers. But let’s be honest—it often feels like we’re always one step behind. Scammers are crafty. They adapt fast, dodge systems with ease, and always seem to know how to disguise their schemes. Many of the tools we rely on either react too slowly or are just too blunt to catch the clever stuff. It’s frustrating—and exhausting.
Now, Google says it’s ready to change the game. Their new on-device AI scam detection feature, built right into Android phones, aims to spot sketchy calls and texts the moment they happen. It’s smart, fast, and works without sending your data to the cloud. Sounds promising, right?
Key Points:
- Google is introducing a real-time scam detection tool powered by AI right on your Android device.
- This tool scans your incoming calls and texts using AI that runs locally on your phone, flagging anything suspicious before you even interact with it.
- Similar tech from Apple, Samsung, and apps like Truecaller hasn't had huge success, mostly because scammers evolve faster than the tech does.
- Experts say while on-device AI is a good start, it needs more context—like user behavior and scam trends—to truly stay ahead.
- To really protect users, AI tools need to be updated often and backed by strong monitoring. Scammers don’t rest—and our defenses shouldn’t either.
So, could this be the breakthrough we’ve been hoping for? Maybe. But tech alone won’t save us. It’s going to take smart tools, constant updates, and a whole lot of vigilance to finally outrun the scammers.
What Is Google’s AI Scam Detection?
Back in March 2025, Google quietly started testing a new AI-powered Scam Detection feature in its Messages and Phone apps. It’s all part of a bigger effort to protect users right on their phones—and help fight back against the growing mess of scam texts and shady calls that seem to hit us daily.
Just ahead of the big Android 16 release next week, Google announced on May 13 that it has made the scam detection tool even smarter. The goal? To catch more scams and help people feel safer when using their phones.
When this tool first rolled out, it was mostly focused on spotting package delivery tricks and fake job offers. But now, Google says it has taught the AI to recognize a much wider range of scams—things like crypto cons, fake toll road or billing fees, financial impersonations, sketchy tech support schemes, and more.
So how does it work?
Let’s say you get a suspicious text. Your Messages app quietly checks it using AI built directly into your phone. If something feels off, it throws up a warning. You can then choose what to do—report it, block the sender, or just ignore the alert. Simple, but helpful.
The same goes for calls. If a call includes a weird request—like turning off your security settings or installing some shady app—Google’s system can flag it. This only kicks in for unknown callers, and the best part? All of it happens on your device, so your private conversations stay private.
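To make that flow concrete, here is a toy sketch of the screening logic described above. This is purely illustrative: Google has not published its implementation, and the keyword list below stands in for what is really an on-device machine-learning model. The function names and phrases are invented for this example.

```python
# Toy illustration of on-device scam screening (NOT Google's actual code).
# A real system would run a local ML classifier instead of keyword matching.

SUSPICIOUS_PHRASES = [
    "disable play protect",      # requests to turn off security settings
    "install this app",          # pressure to sideload software
    "unpaid toll",               # fake toll/billing fee scams
    "verify your account",       # impersonation of banks or services
    "crypto investment",         # investment cons
]

def screen_message(text: str, sender_known: bool) -> str:
    """Return 'allow' or 'warn' for an incoming message.

    Mirrors two properties of the described system: screening only
    applies to unknown senders, and a flagged message produces a
    warning the user can act on (report, block, or ignore).
    """
    if sender_known:
        return "allow"  # known contacts are never screened
    lowered = text.lower()
    if any(phrase in lowered for phrase in SUSPICIOUS_PHRASES):
        return "warn"
    return "allow"
```

The key design point the sketch preserves is that everything runs locally: no message text leaves the device, which is what lets Google claim the feature is private by default.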
And Google’s not alone in this fight.
Others are stepping up too. In the UK, for example, O2 has a clever AI assistant called dAIsy. She actually wastes scammers’ time by keeping them talking, while O2’s Call Defence system uses AI to check if a call seems fishy or not.
Over in the U.S., big players like AT&T, T-Mobile, and Verizon are all using AI to filter scam calls as well—trying to stop trouble before it even rings your phone.
All in all, it’s comforting to see tech companies finally putting real effort into shielding us from these daily annoyances and dangers. Because honestly, we’ve got enough to worry about without falling for the latest scam.
Can Google’s New AI Tool Really Stop Scammers on Android?
The idea of using AI to catch scams isn’t exactly breaking news. Over the past few years, phone companies and cybersecurity firms have all tried their hand at this—hoping to slow down the growing wave of digital scams that seem to get smarter every day.
Just this April, Samsung renewed its partnership with Hiya, promising to keep boosting its Smart Call feature with Hiya’s Adaptive AI through 2028.
Apple, on the other hand, has its own system that quietly sorts unknown text senders using machine learning.
And apps like Truecaller and Norton have also jumped on the AI bandwagon to sniff out shady behavior.
But even with all these tools, scam calls and texts still find their way in.
So when Google announced its new Scam Detection tool for Android, it sounded promising—maybe even hopeful. But it also left us wondering: is this the real breakthrough we’ve been waiting for, or just another layer in a long line of filters that let bad actors slip through?
To get some clarity, we asked a few experts in AI and cybersecurity to weigh in on what Google’s new system really means—and how far it can go.
Gaurav Tendolkar, a Senior Data Scientist at Microsoft, shared something deeply personal: he actually fell for a scam call last year. That experience left a mark. So when he heard about Google’s new effort, he called it “a great step in terms of security.”
Still, he was honest. There’s no such thing as perfection, not even in AI. “Even with today’s AI sophistication,” he said, “it is impossible to get a 100% correct scam detection paired with 0% false alarms.”
In other words, mistakes will still happen—some threats will be missed, and some harmless messages may be wrongly flagged.
Tom Tovar, CEO at Appdome, a mobile security platform, also had mixed feelings. He said device-based detection is definitely better than relying on cloud systems, but it’s still only part of the puzzle. Why? Because it doesn’t see the bigger picture.
“Effective fraud prevention,” he explained, “needs mobile brands to build AI-powered defenses directly into the heart of their apps—not just plug something in after the fact.” That means tracking user behavior in real time, understanding patterns, and reacting as things happen—not hours later.
According to Tovar, this lack of deep integration is exactly why earlier AI scam tools haven’t lived up to their promises. “Too many of these tools are bolt-ons,” he said—add-ons, really, like widgets or cloud APIs. “They often miss the bigger signals—like subtle impersonation tactics or the constantly changing tricks of today’s scammers.”
In the end, Google’s tool might be a strong step in the right direction. But as these experts make clear, beating modern scams will take more than AI. It’ll take deep, smart systems—designed with the whole threat landscape in mind.
The Bottom Line
Even though companies are pouring more money into AI to catch fraud, the truth is—we can’t fully trust it just yet. Scammers are fast, clever, and always switching things up. They tweak their tricks, their wording, even their formats—making it tough for AI systems to keep up. What works well today might completely miss the mark tomorrow.
Ken Jon Miyachi, co-founder and CEO of Bitmind, reminds us that AI doesn’t think like we do—it works on probabilities, not certainty.
“AI-based fraud detection isn’t foolproof,” he explains. “It can sometimes miss the mark or sound a false alarm, especially when it comes across unfamiliar scams. That kind of thing can really frustrate users and let subtle frauds slip through.”
Miyachi still sees a lot of promise in AI—especially if it's constantly updated and carefully watched. But he’s clear: leaning on AI alone could backfire. It could shake people’s trust and leave systems vulnerable right when they need to be strongest.
Sources
- Bankrate: Survey finds 1 in 3 Americans have faced a financial scam or fraud in the past year
- Google Online Security Blog: What's New in Android Security and Privacy in 2025
- O2: Meet dAIsy, the scam-fighting AI bot
- Business Wire: Hiya and Samsung Extend Strategic Partnership Through 2028