Elon Musk’s AI tool under fire for making explicit Taylor Swift videos

A storm is brewing around Elon Musk’s AI video generator after accusations that it created explicit videos of Taylor Swift—without anyone even asking for it.

Clare McGlynn, a law professor who fights against online abuse, didn’t mince words: “This isn’t some harmless glitch. This is intentional.” McGlynn has worked on laws to make pornographic deepfakes illegal, and she says this is a clear example of technology crossing a dangerous line.

According to The Verge, Grok Imagine's new “spicy” mode jumped straight to producing fully uncensored topless videos of the singer, even though nothing in the prompt asked for sexual content. On top of that, the system apparently lacked the proper age-verification checks that have been legally required in the UK since July.

xAI, the company behind Grok, hasn't yet responded to requests for comment, even though its own rules forbid creating pornographic likenesses of real people.

McGlynn argues the problem runs deeper: “When an AI makes sexualized content without being asked, it’s showing the misogynistic bias built into much of this technology. Companies like X could stop this, but they’re choosing not to.”

Sadly, this isn’t the first time Taylor Swift’s image has been misused. Back in January 2024, sexually explicit deepfake videos of her spread like wildfire across X and Telegram, racking up millions of views.

For those unfamiliar, deepfakes are AI-made images or videos where one person’s face is swapped onto another’s body. They can be used for fun or art—but when weaponized against someone, they can destroy reputations and cause immense emotional harm.

"Totally unfiltered, completely raw"

While experimenting with the limits of Grok Imagine, The Verge journalist Jess Weatherbed decided to try a lighthearted prompt: “Taylor Swift celebrating Coachella with the boys.”

At first, Grok gave her still images — Taylor in a dress, smiling, with a group of men standing behind her. Harmless enough.

But then came the options to animate the images into short video clips, with four different settings: normal, fun, custom, or spicy. Out of curiosity, Jess clicked on spicy.

What she saw next stunned her.

“The dress was gone in an instant,” she told BBC News, still sounding a bit unsettled. “All that was left was a tasselled thong, and she started dancing — no censorship, nothing hidden. It was… shocking how quickly it escalated. I never asked it to take off her clothes. All I did was pick the spicy setting.”

Gizmodo later reported seeing similar explicit results when testing the tool with famous women. In some cases, the videos came back blurred or with a “video moderated” warning, but the risk was clearly there.

The BBC couldn’t confirm the exact results Jess described, but she shared more about the test. She had paid £30 for a subscription to Grok Imagine, using a brand-new Apple account. The platform only asked for her date of birth — there was no other age check.

That’s a problem under new UK laws introduced at the end of July. Platforms that show sexual images are now required to verify users’ ages using methods that are “technically accurate, robust, reliable and fair.”

A spokesperson from Ofcom, the UK’s media regulator, told the BBC:

“Sites and apps that include Generative AI tools capable of producing pornographic content are regulated under the Act. We’re aware of the growing risks these tools present, especially to children, and we’re working to make sure platforms have proper safeguards in place.”


New UK Laws on Deepfake Porn

Right now in the UK, creating pornographic deepfakes is already illegal if they’re used for revenge porn or if they involve children. But the law doesn’t yet cover all situations where someone’s image is used without consent.

That’s about to change. Professor Clare McGlynn, who has long campaigned against image-based abuse, helped shape an important amendment that would make it illegal to create or even request any non-consensual pornographic deepfake—full stop. The government has promised to bring this change into law, but it hasn’t officially started yet.

Baroness Owen, who pushed the amendment in the House of Lords, put it bluntly:

“Every woman should have the right to choose who owns intimate images of her.”

She stressed that consent should be the rule whether the woman is a Hollywood celebrity or someone you’ve never heard of. In her view, waiting any longer to act just means more people will get hurt:

“This case shows exactly why the Government must not delay any further.”

A spokesperson for the Ministry of Justice agreed, calling sexually explicit deepfakes made without consent “degrading and harmful”. They said the government refuses to tolerate this kind of abuse against women and girls, and that’s why they’ve moved to ban it as quickly as possible.

The urgency became even clearer in early 2024, when pornographic deepfakes of Taylor Swift suddenly went viral. The situation got so bad that X (formerly Twitter) temporarily blocked searches for her name, while rushing to delete the images and punish the accounts spreading them.

More than a year later, when tech reporters at The Verge tested the new AI image and video tool Grok Imagine, they picked Taylor Swift as their example, assuming the system would have safeguards to protect her likeness.

“We thought, given what happened, she’d be the first person they’d protect,” said journalist Jess Weatherbed. “Turns out, we were wrong.”

Taylor Swift’s team hasn’t commented yet, but the incident has only fueled calls for stronger laws—not just in the UK, but also in the US—so that nobody, famous or not, has to see their face misused in such a violating way.