Shadow AI Is Becoming a Serious Risk in Healthcare and DevSecOps
The rise of Shadow AI, the unapproved use of AI tools by employees, is creating growing concern across industries. Since generative AI tools became widely available, workers have found it tempting to use apps like ChatGPT to speed up their tasks, but many don’t stop to think about the risks involved. By using these AI apps without approval, they may unknowingly expose confidential data or compromise their organization’s security.
As AI spreads beyond the tech world and into industries like healthcare and software development, the risks of Shadow AI become even more severe. The numbers below, and the stories behind them, show just how dangerous careless use of AI can be.
The Shocking Numbers Behind Shadow AI
A recent survey from the U.S. National Cybersecurity Alliance (NCA) found that 38% of employees are using AI tools without their employer’s knowledge, and many of them are sharing sensitive information with those unapproved tools. More alarming still, more than half of workers (52%) admitted they have received no training on how to use AI securely.
Younger workers are leading the trend: Gen Z (46%) and Millennials (43%) use Shadow AI more heavily than older generations. And this is not a minor issue. The risks go beyond data leaks and security breaches: in healthcare, a small mistake can cause real harm to patients, and in software development, it can introduce vulnerabilities that put millions of users at risk.
Shadow AI in Healthcare: A Growing Concern
The situation in healthcare is particularly alarming. Dr. Danielle Kelvas, a U.S.-trained physician, spoke with Techopedia about the urgent need for AI policies to be enforced across the healthcare industry. “It’s really tempting to ask AI for help with medical questions, but we can’t do that at the cost of exposing patients’ protected health information (PHI),” she explained.
A report from The New York Times highlighted how more than 15,000 doctors from 150 healthcare systems across America are using AI tools through the MyChart patient portal to write messages to patients. While some of these messages disclose that AI is involved, many don’t. Even though these systems aren’t technically considered Shadow AI, the growing reliance on large language models (LLMs) like ChatGPT shows just how appealing these tools have become, even for top physicians.
A Harvard study revealed something surprising: ChatGPT’s answers were judged better than doctors’ about 80% of the time, and it scored far higher on empathy (45%, versus just 4.6% for physicians). But Dr. Kelvas is clear that the solution isn’t to ban AI use altogether. “Instead of preaching abstinence, hospitals should offer approved, secure AI tools,” she said. “AI boosts productivity, so we need to embrace it—just safely.”
Dr. Kelvas also suggested that peer-led learning sessions could be a powerful way to educate healthcare professionals on security risks. “A 20-year-old is more likely to listen to another 20-year-old about technology than they are to a baby boomer.”
Shadow AI Threatens Software Development and DevSecOps
Shadow AI also presents major risks in software development, especially in the context of DevSecOps, the practice of building security into every step of the software development and delivery process. As developers turn to unapproved AI apps to speed up their work, they risk introducing serious vulnerabilities into their code.
Matias Madou, Co-Founder and CTO of Secure Code Warrior, explained to Techopedia that developers are under increasing pressure to deliver better software faster. With tools like ChatGPT at their fingertips, many new developers are skipping the essential step of reviewing code for vulnerabilities, which can open the door to security breaches. “If Chief Information Security Officers (CISOs) don’t even know these AI tools are being used, they can’t protect against the risks,” Madou said.
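To make that risk concrete, here is a minimal, hypothetical sketch (not taken from any real AI output or codebase, and with illustrative table and function names) of the kind of flaw that unreviewed, AI-generated code can carry: a database query assembled by string formatting, next to the parameterized version a security review would insist on.

    # Hypothetical example: a flaw commonly found in hastily generated code.
    # Table and column names are illustrative assumptions.
    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # Vulnerable: user input is concatenated into the SQL statement, so an
        # input like "x' OR '1'='1" returns every row (SQL injection).
        query = f"SELECT id, email FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # What a review (or a DevSecOps pipeline with static analysis) should
        # require: a parameterized query, where the driver treats the input as
        # data rather than as executable SQL.
        query = "SELECT id, email FROM users WHERE username = ?"
        return conn.execute(query, (username,)).fetchall()

Static analysis tools routinely flag the first pattern automatically, which is exactly the kind of check a DevSecOps pipeline is meant to enforce before code ships.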
Madou believes the solution lies in creating a more open-minded work environment where AI use is acknowledged and monitored. This would allow security teams to see which AI tools are being used and take steps to manage the risks. “A ‘security-first’ culture is key,” he said. “Employees need to be encouraged to ask for permission rather than forgiveness.”
So, What Can Be Done? Building a Safer AI Future
Experts agree that fostering a transparent and open relationship between employers and employees is crucial for addressing the risks of Shadow AI. Creating a culture that values safety, innovation, and continuous learning is an essential first step.
But culture alone isn’t enough. Arti Raman, CEO of Portal26, a company focused on AI risk management, emphasized that organizations also need technological solutions. “To control the use of unapproved AI apps, companies need to invest in AI visibility, governance, and education,” Raman explained. By using centralized AI monitoring tools, businesses can track AI usage across their networks and identify potential security gaps.
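As a generic illustration of what such visibility can look like in practice (a sketch under assumptions, not a description of Portal26’s products), the snippet below scans an outbound proxy log for requests to a few well-known generative AI endpoints. The domain list, log format, and file path are all assumptions for illustration.

    # Generic sketch: flag outbound requests to known generative AI services in a
    # proxy log. Domains, log format, and file path are illustrative assumptions.
    import re
    from collections import Counter

    AI_DOMAINS = ("api.openai.com", "chat.openai.com", "claude.ai", "gemini.google.com")

    def flag_ai_traffic(log_path: str) -> Counter:
        hits: Counter = Counter()
        with open(log_path, encoding="utf-8") as log:
            for line in log:
                for domain in AI_DOMAINS:
                    if domain in line:
                        # Assumes each log line contains "user=<name>"; adapt to your format.
                        match = re.search(r"user=(\S+)", line)
                        user = match.group(1) if match else "unknown"
                        hits[(user, domain)] += 1
        return hits

    if __name__ == "__main__":
        for (user, domain), count in flag_ai_traffic("proxy.log").most_common():
            print(f"{user} -> {domain}: {count} request(s)")

A real deployment would work from the organization’s actual gateway or DNS logs and feed the results into existing monitoring, but the principle is the same: you cannot govern AI use you cannot see.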
Raman pointed out that AI forensics tools can also provide valuable insights into how AI is being used within an organization. By understanding which tools employees are using and for what purpose, companies can better protect sensitive data and avoid major security breaches.
The Bottom Line
As Shadow AI moves into high-risk industries like healthcare and software development, the stakes are getting higher. Ignoring the risks of unapproved AI use is not just dangerous—it’s costly. A strong culture of openness, paired with the right technology, is the key to managing these risks and ensuring that AI works for companies, not against them. Experts are clear: action is needed, and it’s needed now.