The Alarming Power of AI in Spreading Extremism: A Wake-Up Call

Anil Kaushik, the notorious cow vigilante who allegedly shot and killed a Class 12 student in Faridabad on suspicion of cow smuggling, wasn't just active on the streets. He also had a dangerous online presence, using platforms like Facebook and YouTube to amplify his vigilante activities. With over 10,000 followers on his personal Facebook page and 90,000 on the page of his group, 'Live for Nation', Kaushik built an online empire, sharing dramatic videos of cow rescues and chases. That reach gave him a troubling degree of influence, drawing followers and sympathizers to his cause.

The role of social media in promoting extremism is not new, but we now face a far bigger threat: AI-generated text. It is hard to say exactly how individuals like Kaushik might use AI to spread harmful ideologies, but the technology's potential for misuse is clear. Chatbots like ChatGPT, which use AI to generate fluent text on demand, can easily be manipulated into producing disinformation, hate speech, and even incitement to violence.

What makes this even more worrying is that anyone, regardless of writing ability, can now create polished, convincing content: news articles, essays, or scripts. This makes it easier than ever for extremists to spread their messages, recruit followers, and execute their plans quickly and efficiently.

In one experiment, I prompted ChatGPT to write a blog supporting cow vigilantes who turn to violence as a last resort to protect their faith. Shockingly, the chatbot generated a detailed post portraying the vigilantes as defenders of their faith, frustrated with the legal system and forced into violence. Although the post included a brief disclaimer about not endorsing violence, the message was clear. When I asked the AI to remove the disclaimer, it complied without hesitation, producing another piece with no mention of rejecting violence.

When I pushed it further and requested a blog defending cow vigilantism that resulted in the deaths of alleged cattle smugglers, it again produced a detailed and uncritical post. This is a terrifying example of how AI can be exploited to generate harmful content without meaningful safeguards in place.

AI-driven platforms like ChatGPT are rapidly changing industries by letting users generate content quickly and easily. Since its launch in November 2022, ChatGPT has been widely adopted, thanks to a user-friendly interface that requires no technical expertise. But the same simplicity that makes the tool so appealing also makes it dangerous. Extremists and terrorists can use AI to produce propaganda, spread hate, and justify violence in ways that are more persuasive and far-reaching than ever before.

In another disturbing test, I asked ChatGPT to write a blog exploring the motives of the terrorists behind the 1999 hijacking of Indian Airlines flight IC 814. The result was a narrative that sympathized with the hijackers, dwelling on their political and religious motives while only briefly noting that taking innocent lives is wrong. Content like this could easily be used by extremists to spread their views and justify their actions.

The ability to generate convincing extremist content with AI is not just a theoretical concern. In 2020, researchers found that GPT-3, a predecessor of ChatGPT, could produce dangerous content such as manifestos echoing mass shooters and posts promoting conspiracy theories like QAnon. A report from Australia's eSafety Commissioner in August 2023 warned that AI language models could be used by extremists to craft tailored propaganda, recruit new followers, and incite violence.

The issue is no longer limited to social media. AI has opened new doors for extremists to spread their harmful ideologies at an unprecedented scale. A 2023 report from the Global Internet Forum to Counter Terrorism raised the alarm about the potential misuse of generative AI by terrorists and called for urgent action to mitigate this emerging threat.

Despite these concerns, AI tools like ChatGPT still lack adequate safeguards against misuse by extremist groups, and regulation is needed sooner rather than later. In the U.S., lawmakers from both parties have called for a government agency dedicated to regulating AI. In India, however, the focus seems to be more on controlling criticism on social media than on addressing disinformation or combating content that fuels extremism.

Anil Kaushik's unchecked digital empire serves as a warning. Without proper safeguards, AI could fuel the spread of violence, extremism, and hate on a scale we have never seen before. It's time we took this threat seriously, before the consequences become graver still.