These scam ads lure you with promises of advanced AI chatbot features, but instead, they steal your personal information.
A security firm has uncovered a new scheme where threat actors are using sponsored social media posts and ads to promote fake enhanced versions of AI tools like ChatGPT and Bard. These posts, which appear on platforms like Facebook, entice users to download malware disguised as these AI applications.
When victims download the malware, it infiltrates their systems and steals sensitive information stored in their browsers, such as passwords and cryptocurrency wallets.
The scam's rapid spread comes down to two factors: the sophisticated methods the threat actors use to make their pages appear legitimate, and unsuspecting users interacting with the sponsored ads. Together, these have allowed the malicious content to proliferate swiftly across Facebook.
Dangerous Fake Chatbots Lurking
Cybersecurity firm Check Point has identified a group of cybercriminals posing as AI chatbots like ChatGPT, Jasper AI, Google’s Bard, and AI image generator MidJourney on Facebook.
These threat actors create Facebook pages that reach users' news feeds either organically, through engagement, or via sponsored ads.
Many posts on these fraudulent pages claim to offer fake enhancements and upgrades, such as GPT-5, Smart Bard, and Bard Version 2, none of which actually exist.
Check Point reports that most of these Facebook pages direct users to similar landing pages, urging them to download password-protected archive files purportedly related to generative AI engines.
Alarmingly Realistic Fake Facebook Pages Are Spreading
Check Point also highlighted the extensive efforts by malicious actors to avoid arousing suspicion among their targets.
For instance, one fake page for the AI image generator MidJourney boasts an impressive 1.2 million likes, creating a significant trust signal for unsuspecting users.
The cybercriminals have also shrewdly mixed legitimate links to MidJourney’s official websites with links to their own malicious landing pages.
Additionally, they use bot accounts to post positive comments on their fake page’s posts, further enhancing the illusion of legitimacy.
How to Spot and Protect Yourself from Scams
If you were to click on one of these scams, as Check Point did during its investigation, infostealers such as Doenerium or the libb1.py script would be downloaded onto your device, and your stolen data would immediately be sent to a Discord server.
Despite the sophistication of this scam and the extensive measures taken by the perpetrators to avoid suspicion, there are still telltale signs that can help you recognize it. Noticing these signs can protect you from falling victim.
For instance, the promise of "enhanced" versions of chatbots should be a red flag, no matter how legitimate a page appears. A quick Google search will reveal that such tools do not exist.
It's also important to remember that certain elements do not confirm a website's legitimacy. Links to genuine companies can be included on any site, and bot accounts can flood a page with positive comments, which doesn't necessarily mean it's trustworthy.
If you want to use an AI chatbot like Bard or ChatGPT, your safest approach is to search for the official website on Google rather than clicking links from social media or messaging apps like WhatsApp, which are also prone to scams.
Given the popularity of AI, scammers are quick to exploit it. Therefore, treat any offers or opportunities that seem to capitalize on the AI trend with caution.