It’s Still Too Soon to Fully Trust AI Browsers, “Scamlexity” Test Warns

Not long after we explored Comet’s new AI-powered browsing abilities, a report from Guardio Labs shook the tech world — revealing that AI browsers might not be as safe as we thought.

The researchers showed how these intelligent browsers could be tricked into carrying out online scams — from making fake purchases to handing over private login details to phishing sites — all without the user realizing anything was wrong.

The investigation, led by cybersecurity experts Nati Tal and Shaked Chen, put Perplexity’s Comet browser through a series of clever attack tests.

Unlike traditional browsers that depend on a person’s instinct and judgment, AI browsers tend to trust too easily. They follow instructions blindly, without that little voice in your head that says, “Wait, this doesn’t look right.”

It’s a reminder that while AI feels smart, it doesn’t yet think like a human — and that can be dangerous.

Let’s take a closer look at the biggest risks these AI browsers face, and what could help make them safer.

It’s Still Too Soon to Fully Trust AI Browsers, “Scamlexity” Test Warns

Key Takeaways

  • AI browsers like Comet can be fooled into finishing online scams such as fake purchases and phishing logins.
  • Guardio Labs found that AI systems don’t have the same natural caution or “gut feeling” that helps humans spot danger online.
  • Prompt-injection attacks can hide harmful instructions in web pages, giving hackers quiet control over the browser.
  • Current safety systems, like Google Safe Browsing, aren’t strong enough to protect AI-driven browsing.
  • Experts suggest adding built-in safety features such as phishing filters, spoofing alerts, and smart anomaly detection to stop these risks early.

When Smarter Tech Makes Old Scams Even Worse

A new study by Guardio Labs has revealed something pretty alarming — even the latest AI-powered browsers, built to “think” and act on their own, are still tripping over basic online security traps that have existed for decades.

To test this, researchers decided to run a little experiment. They built a fake Walmart website using the “Lovable” platform — it looked almost real but had plenty of obvious red flags. Then they told an AI browser assistant called Comet to “buy an Apple Watch.”

What happened next was both fascinating and worrying. Comet started browsing the web on its own, clicking through pages, scanning buttons, and finding the product. But despite clear hints that the site was a scam — things a human would spot right away — Comet didn’t notice a thing. It happily added the watch to the cart, and without asking anyone, autofilled saved addresses and credit card details from the browser’s memory to complete the fake purchase.

The researchers noted that Comet sometimes showed hesitation — occasionally refusing to buy or asking a human to step in — but it wasn’t consistent.

They explained:

“We ran this test several times. Sometimes, Comet sensed something off and refused to continue. Other times, it just asked the user to complete the checkout manually.”


Terrible at Spotting Phishing Tricks

If you thought that was concerning, there’s more. These AI browsers can read your emails, manage tasks, and even reply for you — but when it comes to spotting phishing scams, they’re still painfully naive.

In another test, the team sent a fake email from a brand-new ProtonMail account pretending to be a Wells Fargo investment manager. Inside was a link to a real phishing website — one that hadn’t yet been flagged by Google’s Safe Browsing system.

Comet treated the email as completely legitimate. It added the “important” message to its to-do list, clicked the fake link, and even guided the user toward entering their banking login info on a fraudulent page.

Screenshot showing the phishing email and fake Wells Fargo page.
Screenshot showing the phishing email and fake Wells Fargo page. Source: Guardio Lab

It didn’t stop to question or verify anything — it just trusted the message blindly.

The researchers believe this flaw comes from a deeper issue — AI lacks human intuition. Machines don’t have that little voice in the back of their heads that says, “Hmm, something feels off.”

They summed it up perfectly:

“When you remove human intuition from the loop, AI becomes the sole decision-maker. And without solid guardrails, those decisions become a coin toss. When your security depends on luck — it’s only a matter of time before things go wrong.”


Prompt Injection Creates New Threats

AI researchers have taken a deep dive into a worrying new kind of cyber trick — one that targets not just humans, but the AI systems we use every day. Beyond the usual “prompt injection” scams we’ve heard about, they’ve discovered something more advanced, something they’re calling “PromptFix.”

This attack, they explained, is like the next-generation version of the classic ClickFix scam — a sneaky social engineering trick that once fooled humans by mimicking CAPTCHA pages. But now, it’s evolved to deceive AI assistants themselves. “It’s our AI-era version of the ClickFix scam,” the researchers said. “The same old trick, but now it fools your AI instead of you.”

So how does it work?
The attackers hide invisible text boxes inside normal-looking webpages using simple CSS styling. These boxes are completely invisible to the human eye — you’d never notice them. But your AI assistant can. When it browses a page for you, it unknowingly reads these hidden boxes and treats the malicious commands inside as legitimate instructions.

In their test, the team created a fake medical results page that looked totally normal — even safe. There was just a simple checkbox that looked like a regular CAPTCHA. But behind the scenes, hidden deep in the code, was a set of secret instructions written for the AI. The human saw nothing suspicious. The AI, however, read the hidden text and acted on it.

Screenshot of the PromptFix attack showing the hidden prompt injection.
Screenshot of the PromptFix attack showing the hidden prompt injection. Source: Guardio Lab

Because AI assistants are designed to be helpful and efficient, the trap worked perfectly. The AI thought it had encountered a special “AI-friendly CAPTCHA” — something it could handle automatically for the user. So, instead of asking for human help like it should, it simply followed the hidden command and clicked the malicious button.

In their demo, the click only triggered a harmless file download — but it could just as easily have been a malicious file, quietly installing malware on the user’s device without anyone noticing.

The real danger runs much deeper.
If this trick works once, it could let attackers take over the AI browser entirely. As the researchers warned, the same hidden-prompt method could be used to make the AI send emails with personal data, share private files, or perform any action it’s allowed to do.
In other words — if the hacker controls your AI, they practically control you.


The Bottom Line

The researchers stress one crucial message: AI browsers must be built with security at their core, not as an afterthought. Right now, many are designed to prioritize smooth user experience over deep protection — and that needs to change.

They also pointed out that most AI browsers still rely on traditional tools like Google Safe Browsing to detect threats. But these tools were built for humans, not AI-driven interactions, and they simply don’t go far enough.

Their recommendation?
Build AI browsers with the same kind of safety systems we already trust in human browsing — just smarter and integrated directly into the AI’s reasoning process.

That means:

  • Phishing detection that works inside the AI’s logic,
  • URL reputation checks before the AI visits a page,
  • Domain spoofing alerts tailored for AI,
  • Malicious file scanning before downloads, and
  • Behavior tracking that spots when the AI starts acting differently than expected.

The message is clear:
If AI is going to browse the internet for us, it needs to be just as cautious — maybe even more so — than we are.


References:—

  1. “Scamlexity” We Put Agentic AI Browsers to the Test – They Clicked, They Paid, They Failed (Guardio)