Microsoft Calls for Immediate Legislation to Tackle AI Deepfake Threats

Microsoft is calling for urgent action from lawmakers to address the growing threat of AI-generated deepfakes used for fraud and abuse. The company's Vice Chair and President, Brad Smith, has stressed the need for legislation to protect the public, especially vulnerable groups like senior citizens and children.

The Need for New Laws

In a recent blog post, Smith highlighted that while the tech industry and non-profits have taken steps to mitigate deepfake issues, existing laws are insufficient. He urged Congress to create a “deepfake fraud statute” to facilitate prosecution of such crimes. Although some states like California, Colorado, and Virginia have established regulatory frameworks for AI, federal legislation is still lacking.

Smith emphasized that the focus so far has been too narrow, mainly targeting the political misuse of deepfakes. However, he argues that the broader impact of deepfakes on various types of crime and abuse also needs attention.

Tools and Weapons

Smith's blog post coincides with the release of a 42-page report that echoes the warning of his 2019 book, "Tools and Weapons," about the dual nature of technological innovation. The report underscores AI's transformative impact across sectors such as medicine, while also highlighting the dangers it poses, illustrated by the FBI's recent dismantling of a Russian bot farm that was spreading AI-generated disinformation.

Smith cautions that AI has made it significantly easier to create deceptive media, such as voice clones of family members, deepfake images of political candidates, and falsified government documents. He asserts that AI has quickly become both a valuable tool and a dangerous weapon.

A Call to Action

The new report is a call to action, urging US lawmakers to pass comprehensive legislation to combat deepfake fraud. Smith acknowledges the efforts of Congress and its collaboration with tech giants and action groups, but insists on the need for a dedicated legal framework to prosecute AI-generated scams effectively.

Smith argues that the priorities should be preventing election interference, protecting the elderly from fraud, and safeguarding women and children from online exploitation. With an election approaching, he urges swift action from both lawmakers and the tech industry, warning that the real danger lies in moving too slowly or not at all.

In conclusion, Smith’s message is clear: the rapid evolution of AI demands an equally swift legislative response to protect the public from the emerging threats posed by deepfakes.