Researchers Reveal How Copilot and Grok Could Be Misused as Secret Malware Channels

Cybersecurity researchers have uncovered something unsettling: popular AI assistants like Microsoft Copilot and xAI’s Grok can be misused as hidden communication channels for malware. What’s meant to help people work faster and smarter could, in the wrong hands, quietly help attackers blend into normal company traffic and slip past security systems without raising alarms.

The technique, demonstrated by researchers at Check Point, has been named “AI as a C2 proxy.” At first glance, it sounds technical — but the idea behind it is surprisingly simple and a bit alarming.

According to Check Point, the attack takes advantage of AI tools that can browse the web or fetch URLs. By combining anonymous web access with carefully crafted prompts, attackers can trick these AI assistants into acting like messengers. Even more concerning, the same setup could allow malware to use AI to plan its next moves — gathering information, writing scripts, and deciding what to do next during an attack. It’s almost like giving malware a thinking partner.

This discovery marks another shift in how cybercriminals may exploit AI. It's no longer just about using AI to write phishing emails faster or to generate malicious code ahead of time. Now, malware can call AI services from inside a compromised system to produce code on the fly, adjusting its tactics in real time based on what it finds there. That makes the approach adaptive, flexible, and harder to detect, which is precisely what makes it worrying.

AI tools have already become powerful helpers for threat actors. They can assist with reconnaissance, scanning for vulnerabilities, crafting convincing scams, creating fake identities, debugging code, or even building malware. But using AI as a command-and-control (C2) proxy takes things to another level.

In this scenario, the browsing features of Grok or Copilot are used to reach attacker-controlled websites. The assistant retrieves instructions hidden at those locations and relays the responses back through its normal interface. The result is a two-way communication channel: attackers can push commands into an infected system and quietly receive stolen data in return.
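To make the flow easier to picture, here is a deliberately harmless sketch of the relay pattern. It is an illustration under stated assumptions rather than anything Check Point has published: the assistant's URL fetch is stood in for by a plain local HTTP request, the "attacker page" is a throwaway test server returning a fixed string, and the retrieved text is only printed, never acted on.

```python
# Toy illustration of the relay pattern only. The "assistant" here is a plain
# local HTTP fetch, and the retrieved "instruction" is printed, never executed.
# In the scenario Check Point describes, the fetch would be performed by the
# AI service's own infrastructure, so the infected host never contacts the
# attacker's site directly; that indirection is the whole point.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class TestPage(BaseHTTPRequestHandler):
    """Stands in for an attacker-controlled page hosting an 'instruction'."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"example-instruction: report hostname")

    def log_message(self, *args):  # keep the demo output quiet
        pass


def assistant_fetch(url: str) -> str:
    """Placeholder for 'ask the assistant to summarise this URL'.

    A real assistant with browsing enabled would fetch the page itself and
    return its contents (or a summary) through the normal chat interface.
    """
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read().decode()


if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 8000), TestPage)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # "Malware" side of the toy: relay one message and stop. Nothing is run.
    instruction = assistant_fetch("http://127.0.0.1:8000/task")
    print("Relayed text:", instruction)

    server.shutdown()
```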

What makes this especially troubling is that it doesn’t require an API key or even a registered account. That means traditional defenses like revoking keys or suspending accounts simply won’t work. For security teams, that realization can feel frustrating and unsettling.

In many ways, this tactic isn’t entirely new. Attackers have long abused trusted platforms to distribute malware or manage C2 operations — a strategy often called “living-off-trusted-sites” (LOTS). The difference now is that AI platforms, which many organizations trust and rely on daily, could become part of that chain.

There’s an important detail, though: for this method to work, attackers must first compromise a system through other means. Malware has to already be installed. Once inside, it can use specially designed prompts to make the AI assistant contact attacker-controlled infrastructure. The AI retrieves instructions, passes them back to the infected machine, and the malware executes them. It’s clever — and a little chilling in how it hides within normal-looking activity.
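That last point, blending into normal-looking activity, is worth dwelling on: from the infected machine, the only outbound traffic goes to a legitimate AI service. As a purely illustrative direction, not something Check Point prescribes, defenders could ask which local processes are the ones talking to assistant endpoints. In the sketch below, the domain list, the process allowlist, and the telemetry format are all assumptions invented for the example.

```python
# Illustrative heuristic only: flag processes that talk to AI assistant
# endpoints but are not applications expected to do so. The domain list and
# the allowlist are placeholder assumptions, not vendor guidance.
from dataclasses import dataclass

ASSISTANT_DOMAINS = {"copilot.microsoft.com", "grok.com"}            # illustrative
EXPECTED_PROCESSES = {"msedge.exe", "chrome.exe", "firefox.exe"}     # illustrative


@dataclass
class Connection:
    process: str    # name of the local process that opened the connection
    dest_host: str  # hostname the connection was made to


def suspicious_assistant_traffic(events: list[Connection]) -> list[Connection]:
    """Return connections to assistant domains from unexpected processes."""
    return [
        e for e in events
        if e.dest_host in ASSISTANT_DOMAINS and e.process not in EXPECTED_PROCESSES
    ]


if __name__ == "__main__":
    sample = [
        Connection("msedge.exe", "copilot.microsoft.com"),  # normal use
        Connection("updater.exe", "grok.com"),              # worth a closer look
    ]
    for hit in suspicious_assistant_traffic(sample):
        print(f"review: {hit.process} -> {hit.dest_host}")
```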

Check Point researchers also warned that attackers could go even further. Instead of just relaying commands, they could use AI to analyze the infected system, decide whether it’s valuable, and develop strategies to avoid detection. In other words, the AI could act like an external decision-making engine, helping automate targeting and operational choices in real time.

As AI services become more deeply integrated into everyday business tools, the possibility of them being used as stealth transport layers grows. The same interface that answers questions and summarizes documents could, in theory, guide malware operations behind the scenes. It’s a reminder that powerful tools can be double-edged swords.

This disclosure follows another recent finding from Palo Alto Networks’ Unit 42 team. They demonstrated how a harmless-looking web page could be turned into a phishing site through client-side API calls to trusted large language model (LLM) services. Because the malicious JavaScript is generated on the fly rather than shipped with the page, it is far harder to detect by inspecting the page’s source.

The technique resembles what are known as Last Mile Reassembly (LMR) attacks, in which malware is smuggled through channels that many security tools overlook, such as WebRTC or WebSockets, and assembled directly in the victim’s browser, effectively sidestepping traditional security controls.

Unit 42 researchers explained that attackers could design prompts to bypass AI safety protections, tricking the model into producing harmful code snippets. These snippets are returned via the LLM service’s API, assembled in the browser, and executed instantly — turning what seemed like a normal web page into a fully functioning phishing site.
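Because the harmful snippets in this scenario are fetched by the victim’s own browser calling out to an LLM API, one standard hardening step for site operators, offered here as an illustration rather than a Unit 42 recommendation, is a Content-Security-Policy whose connect-src does not allow external LLM endpoints. That limits what an injected script on a legitimate page can reach; it does nothing about pages an attacker hosts outright. The toy server below only shows where such a header would be set.

```python
# Minimal sketch: a response header that limits where in-page scripts may
# connect. 'self' permits same-origin requests only, so a script injected into
# the page could not call an external LLM endpoint from the browser.
from http.server import HTTPServer, SimpleHTTPRequestHandler

CSP = "default-src 'self'; script-src 'self'; connect-src 'self'"


class HardenedHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Attach the policy to every response this toy server sends.
        self.send_header("Content-Security-Policy", CSP)
        super().end_headers()


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), HardenedHandler).serve_forever()
```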

For many in the cybersecurity world, these discoveries bring mixed emotions. There’s admiration for the technical creativity involved — but also real concern. As AI continues to evolve, so do the ways it can be misused. The challenge now is ensuring that innovation doesn’t outpace protection, and that the very tools designed to help us don’t quietly become gateways for harm.