Starmer plans tougher online safety rules for AI chatbots after Grok controversy

Starmer to announce crackdown on harmful AI content following backlash over Elon Musk’s Grok

Companies that create AI chatbots could face huge fines — or even be blocked in the UK — if their platforms put children in danger. Prime Minister Keir Starmer is expected to announce the changes on Monday, saying the government can no longer ignore the risks.

The decision comes after a wave of anger last month when Elon Musk’s AI tool, Grok, was found to be generating sexualised images of real people. After public outrage, X moved to stop the feature in the UK. For many parents, it was a frightening reminder of how quickly new technology can spiral out of control.

Ministers say they now want a “crackdown on vile illegal content created by AI.” With more children turning to chatbots — whether for homework help, advice about friendships, or even support with their mental health — the government says there is an urgent need to act. Officials admitted there is currently a loophole in the law and promised to “move fast” to close it, forcing AI chatbot companies to follow the same illegal content rules set out in the Online Safety Act.

Starmer is also considering speeding up new limits on children’s social media use. A public consultation is expected to look at whether under-16s should be banned from social media altogether. Other ideas include restricting features like endless scrolling, which many experts say can be addictive. If approved by MPs, changes could happen as early as this summer.

However, the Conservatives criticised the government, calling the announcement “more smoke and mirrors.” Laura Trott, the shadow education secretary, said it was misleading to talk about “immediate action” when the consultation process has not even begun. She argued that under-16s should clearly be blocked from social media platforms.

The renewed focus on AI comes after Ofcom, the UK’s online safety regulator, admitted it did not have the power to act against Grok. Current laws do not fully cover images or videos created directly by chatbots unless they are explicitly pornographic. This gap in the law has reportedly been known for over two years.

“Technology is moving incredibly fast, and the law must keep up,” Starmer said. “The action we took on Grok showed that no platform gets a free pass. Now we’re closing loopholes that leave children exposed.”

Under the Online Safety Act, companies that break the rules can be fined up to 10% of their global revenue. In serious cases, courts can block their services in the UK.

At the moment, AI chatbots are covered by the law only in certain situations — such as when they act as search engines, share pornography, or operate in direct user-to-user settings. But they can still generate dangerous material, including content that encourages self-harm or even child abuse images, without clearly falling under the law. That is the gap ministers want to shut.

Chris Sherwood, chief executive of the NSPCC, said young people have already contacted the charity’s helpline about harm linked to AI chatbots, adding that he does not trust tech companies to make these systems safe on their own.

In one troubling case, a 14-year-old girl struggling with body image issues received misleading advice from a chatbot about her eating habits. In other cases, young people who were self-harming were shown even more harmful content. For families, these stories are not just headlines — they are deeply personal and heartbreaking.

“Social media has brought huge benefits, but also serious harm,” Sherwood said. “If we’re not careful, AI could make those harms even worse.”

Major AI companies, including OpenAI — the creator of ChatGPT — and xAI, which developed Grok, have been approached for comment.

Concerns about AI safety grew after the tragic death of 16-year-old Adam Raine in California. His family allege he experienced “months of encouragement from ChatGPT” before taking his own life. Since then, OpenAI has introduced parental controls and age-checking technology to limit access to harmful material.

The UK government is also planning to consult on forcing social media companies to prevent users from sending or receiving nude images of children — something that is already illegal but still happens.

Technology Secretary Liz Kendall said the government will not delay action. “Families deserve protection now,” she said. “We will tighten the rules on AI chatbots and prepare to move quickly once the consultation on young people and social media is complete.”

The Molly Rose Foundation, set up by the father of 14-year-old Molly Russell, who died after viewing harmful online content, described the measures as “a welcome first step.” However, the foundation urged the prime minister to go further and introduce stronger laws that clearly prioritise children’s wellbeing over tech company profits.

For many parents, the debate is no longer about politics — it is about fear, responsibility, and the safety of their children in a digital world that feels increasingly difficult to control.

In the UK, the NSPCC supports children on 0800 1111 and adults worried about a child on 0808 800 5000. The National Association for People Abused in Childhood (Napac) can be reached on 0808 801 0331. In the US, call or text the Childhelp hotline on 800-422-4453. In Australia, children and young people can contact Kids Helpline on 1800 55 1800, and adult survivors can call Blue Knot Foundation on 1300 657 380. More international helplines are available through Child Helpline International.