What 1,000 Global Cybersecurity Leaders Think About GenAI in 2025

Generative AI (GenAI) is a double-edged sword in cybersecurity: a helpful ally and a potential threat. Like it or not, security teams are finding that adopting it is becoming a must. It's already here, and it's clear it will play a role on both sides of the cybersecurity battle. The key is figuring out how to use it wisely.

But what exactly does "wisely" mean when it comes to GenAI? To find out, CrowdStrike surveyed over 1,000 cybersecurity leaders and practitioners from around the world. They shared their thoughts on how they plan to adopt GenAI, what they need from it, and their biggest concerns about using it in cybersecurity.

Here’s what we learned:

Key Takeaways:
  • 80% of respondents prefer GenAI that is integrated directly into cybersecurity platforms rather than delivered as standalone tools.
  • 76% want GenAI tools that are specifically designed for cybersecurity, rather than general-purpose ones.
  • Most security experts believe that GenAI won’t replace humans, but rather enhance the work they do.
  • The biggest economic concern for security teams isn't the cost itself; it's being able to prove the return on investment (ROI).
  • Experts also want to see strong safety and privacy features in GenAI tools to make sure they're secure to use.

These insights reveal how cybersecurity professionals are navigating this fast-evolving technology, balancing enthusiasm with caution as they prepare for the future.


Cybersecurity Experts Embrace Platform-Based AI for Defense

In 2025, cybersecurity professionals are still navigating the early stages of using Generative AI (GenAI) for security. But one thing is clear—they know it’s time to invest.

A recent survey revealed that 64% of security teams have already purchased a GenAI tool, and nearly 70% plan to buy one this year.

But purchasing a tool isn’t just about having the latest tech. Security teams are focused on how well these AI solutions integrate with their existing defenses. A tool’s effectiveness isn’t measured in isolation—it has to fit seamlessly into their broader security ecosystem.

In fact, 63% of companies are open to replacing their security tools entirely if it means gaining access to a better GenAI-powered system. This highlights a growing preference for a platform-based approach—where security tools, data, and processes work together within a unified system.


Why Security Teams Prefer AI Built into a Platform

CrowdStrike, a leading cybersecurity company, explains why this approach matters:

“When GenAI is integrated directly into a security platform, it enhances efficiency, simplifies onboarding, and allows analysts to interact with their tools more naturally—such as using plain language commands.”

Beyond that, embedding AI into a platform makes deployment easier and reduces the complexity of purchasing multiple disjointed tools.


The Real Reason Behind the GenAI Push: Security Breaches

Cybersecurity teams aren’t adopting GenAI just for convenience—it’s a necessity. A staggering 74% of organizations have either suffered a cyberattack in the last 12–18 months or fear they’re vulnerable to one.

Their biggest concern? AI-driven cyber threats are evolving fast. Cybercriminals are already using AI to create smarter, more sophisticated attacks.

Kaustubh Medhe, VP of Research & Cyber Threat Intelligence at Cyble, warns:

“AI-generated malware is advancing rapidly. Traditional detection methods won’t be enough. Defenders will need equally sophisticated AI-driven solutions.”

Because of this, companies prioritize cybersecurity expertise over general AI leadership when choosing vendors. Strong threat intelligence, incident response capabilities, and deep security experience are seen as far more critical than a vendor’s broader AI research or partnerships.

CrowdStrike reinforces this point:

“One-size-fits-all AI tools won’t cut it. 76% of security teams demand GenAI specifically built for cybersecurity—designed by experts to support real-world security operations.”



AI Isn’t Replacing Security Jobs—It’s Enhancing Them

Despite the rise of GenAI in cybersecurity, security teams aren't worried about losing their jobs. The much-discussed AI-powered "autonomous SOC" (Security Operations Center) isn't seen as a realistic near-term replacement for human analysts.

Instead, teams believe GenAI will help ease skill shortages, boost productivity, and reduce burnout—not replace human expertise.

Rizwan Patel, Head of Information Security & Emerging Technology at Altimetrik, explains:

“The goal is to integrate GenAI into security operations—not to replace human analysts, but to enhance their ability to respond faster and more effectively.”

A study by Indeed supports this, analyzing over 2,800 job skills and finding that:

  • None were classified as ‘very likely’ to be replaced by GenAI.
  • Nearly 69% of skills were deemed ‘very unlikely’ or ‘unlikely’ to be replaced.

This aligns with how cybersecurity teams view AI. They see it as a tool that improves workflows, helps analysts onboard faster, and reduces the time spent on tedious tasks.


How Security Teams Are Using GenAI

Organizations are already applying GenAI to enhance cybersecurity operations. Here are the top seven ways security teams are leveraging AI:

  • Threat intelligence analysis & summarization – AI helps analysts quickly process and understand complex security threats.
  • Assisted detection investigation & analysis – AI provides deeper insights into security incidents.
  • Automated response & workflow implementation – AI can take immediate action against threats.
  • Assisted vulnerability management & patching – AI identifies and helps fix security weaknesses.
  • Self-service security answers for IT & engineering teams – AI reduces the burden on security teams by answering routine queries.
  • Writing & editing queries or scripts – AI speeds up scripting for security automation.
  • Onboarding new analysts & answering product questions – AI helps train security teams more efficiently.
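
The first use case above, threat intelligence summarization, can be sketched as a small workflow. This is a minimal illustration, not any vendor's actual API: `ThreatReport`, `build_summary_prompt`, and the `llm_call` parameter are all hypothetical names, and the model call is stubbed out so the example stays vendor-neutral.

```python
from dataclasses import dataclass

@dataclass
class ThreatReport:
    source: str
    indicators: list[str]  # e.g., suspicious IPs, file hashes
    raw_text: str

def build_summary_prompt(report: ThreatReport, max_indicators: int = 5) -> str:
    """Assemble a plain-language prompt asking the model to summarize
    a threat report for a SOC analyst. Truncates the indicator list so
    the prompt stays small and predictable."""
    shown = report.indicators[:max_indicators]
    lines = [
        "Summarize the following threat intelligence for a SOC analyst.",
        f"Source: {report.source}",
        "Key indicators: " + ", ".join(shown),
        "Report text:",
        report.raw_text,
    ]
    return "\n".join(lines)

def summarize(report: ThreatReport, llm_call) -> str:
    """llm_call is whatever GenAI client the platform exposes; it is
    passed in here so the workflow code stays vendor-neutral."""
    return llm_call(build_summary_prompt(report))

# Example with a stand-in for the real model call:
report = ThreatReport(
    source="internal honeypot",
    indicators=["203.0.113.7", "a1b2c3d4"],
    raw_text="Repeated SSH brute-force attempts followed by a crypto-miner drop.",
)
print(summarize(report, llm_call=lambda prompt: f"[model summary of {len(prompt)} chars]"))
```

The point of the sketch is the shape of the workflow: structured indicators plus raw text go in, a plain-language summary comes out, and the analyst reviews it rather than being replaced by it.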


The Future: AI as a Cybersecurity Ally

The message from security experts is clear—GenAI is not just another tech trend. It’s a critical tool in the fight against increasingly intelligent cyber threats.

While it won’t replace human analysts, it will empower them, making security operations faster, smarter, and more effective.

Organizations that integrate AI-driven security solutions now will be far better prepared to defend against the next wave of AI-powered cyberattacks.


Measuring ROI: The Biggest Economic Concern

When it comes to adopting generative AI, companies are grappling with a big question—will the investment pay off? The three biggest financial concerns revolve around measuring return on investment (ROI), high licensing costs, and unpredictable pricing models.

A majority of the 1,000 cybersecurity professionals surveyed believe that AI-powered platforms can help organizations see faster financial returns. By consolidating tools, companies can cut procurement costs, reduce security threats, minimize training time, and lower maintenance expenses, all of which adds up to major savings.
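
The ROI argument above boils down to simple arithmetic. The sketch below uses entirely hypothetical figures, chosen only to illustrate how the consolidation savings the survey mentions might be weighed against platform licensing costs:

```python
def simple_roi(annual_savings: float, annual_cost: float) -> float:
    """Return ROI as a fraction: (savings - cost) / cost."""
    return (annual_savings - annual_cost) / annual_cost

# Hypothetical figures for illustration only:
savings = {
    "retired standalone tools": 120_000,
    "fewer incident-response hours": 80_000,
    "reduced training time": 25_000,
}
license_cost = 150_000  # assumed annual platform licensing fee

roi = simple_roi(sum(savings.values()), license_cost)
print(f"Estimated ROI: {roi:.0%}")  # (225k - 150k) / 150k -> 50%
```

In practice, the hard part the survey respondents point to isn't this division; it's attributing dollar values to avoided breaches and saved analyst hours in the first place.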


The Need for Built-in Safety Measures

Ramesh Nampelly, Senior Director of Cloud Infrastructure and Platform Engineering at Palo Alto Networks, compares AI to a toddler—curious, unpredictable, and capable of causing chaos if left unchecked. He explains:

"Employees are diving into AI tools like DeepSeek to boost productivity. But without proper security, things can spiral—leading to data leaks, compliance mishaps, and unintended risks."

There’s an ongoing debate among cybersecurity experts about whether the benefits of generative AI outweigh its risks. Many agree that strong security and privacy controls are crucial, with the top concerns being exposure of sensitive data and potential cyberattacks targeting AI tools.

As AI adoption continues to grow, companies are putting more emphasis on security frameworks and policies to guide responsible use. Nearly 9 in 10 organizations (87%) have already implemented or are in the process of developing security policies to regulate AI usage over the next year.


The Takeaway

Employees are eager to embrace AI, leveraging tools like DeepSeek to enhance productivity and creativity. However, without the right security safeguards, they could unknowingly put sensitive data at risk, leading to compliance violations and other unintended consequences.

"Just like you wouldn’t let a child roam freely without supervision, businesses need to guide and secure AI use. The goal is to encourage innovation while keeping risks in check," says Anand Oswal, Senior Vice President and GM at Palo Alto Networks.

AI is becoming an integral part of cybersecurity, and its adoption is inevitable. But with the right security guardrails in place, businesses can maintain control, ensuring that AI is used responsibly and safely.



FAQs

What are the biggest security risks when adopting GenAI?

When using GenAI, some major concerns keep security experts on edge. There's the risk of exposing sensitive data to the AI model, making it vulnerable to misuse. Then, there’s the threat of cyberattacks specifically targeting GenAI tools. Without strong safeguards in place, these systems can also generate misleading or harmful information—often called AI hallucinations. And to top it off, the lack of clear policies and regulations around GenAI adds another layer of uncertainty.


What makes companies invest in GenAI for security?

A recent survey by CrowdStrike revealed three main reasons why companies are turning to GenAI for security. First, these tools help detect and respond to cyber threats faster and more effectively. Second, they boost operational efficiency by automating complex security tasks. And finally, with a growing shortage of skilled cybersecurity professionals, GenAI helps fill the gap by handling tasks that would otherwise require human expertise.