Study Reveals 10% of Companies Have Fallen Victim to Deepfake Attacks

One in ten business leaders surveyed reported that their company has faced attempted or successful deepfake fraud.



U.S. businesses are facing an increased risk of deepfake scams, with one in ten surveyed executives reporting that their companies have been targeted.

The concern is heightened by another finding from the study: more than half of these business leaders said their employees have received no training in identifying or preventing deepfake attacks.

These findings add to a growing list of AI-related risks and fuel skepticism about the technology's accelerating adoption in business.

Businesses Increasingly Vulnerable to Deepfake Scams

Over 10% of companies have "faced successful or attempted deepfake fraud," according to a study by Business.com, which surveyed 244 CEOs, C-suite executives, presidents, and vice presidents.

Only 31% of respondents believe that deepfakes have not increased their company's exposure to fraud.

Despite this exposure, 61% of business leaders said no protocols had been established at their companies to address the risks posed by deepfake technology, and fewer than half confirmed that their employees had received training to deal with them.

"Many companies are vulnerable to financial losses and reputation damage because they operate with outdated or weak cybersecurity measures. Too many executives admit their employees have not been trained to identify deepfake media," said Chad Brooks, Managing Editor of Business.com.

As a result, 32% of respondents have zero confidence in their staff's ability to recognize such fraud attempts. Additionally, a quarter of the executives themselves admitted to having "little to no familiarity with deepfake technology."




Understanding the Threat: How Deepfakes Target Businesses


Many notorious deepfake scandals have targeted individuals, such as the fabricated pornography involving Scarlett Johansson and Taylor Swift or the faked audio of UK politician Sir Keir Starmer berating his staff, but the technology is increasingly being used to defraud businesses.

In a blog post, Business.com describes how deepfakes could be used in a customer relations department at a bank: “Armed with voice cloning technology, a fraudster could impersonate a valued customer by contacting the bank’s call center and authorizing fraudulent transactions.”

The post also cites a CNN report of a $25 million fraud in which a Hong Kong finance worker was deceived into handing over the sum to a scammer posing as the company’s CFO on a video call.

“AI programs can create manipulated videos, photos, or even audio with speed and sophistication, making it easier than ever for scammers to mislead customers or defraud employees,” said Brooks.



Insufficient Training on Deepfake Threats


While many large corporations are well prepared for conventional cyber threats such as hacking and phishing, they lag behind in training for deepfake risks.

A survey revealed that only 14% of companies have "fully implemented" measures to counter deepfakes.

According to a Business.com blog post, "80 percent of companies lack protocols for handling deepfake attacks. Without a plan, these companies are vulnerable and unprepared to address and mitigate such incidents to protect their business."