I’m sure we’ve all heard some version of the same question: “Can we use AI for this? What about that?” The hype is everywhere. Today, leveraging AI in a cybersecurity program isn’t just common; it’s fast becoming non-negotiable.
For leaders tasked with protecting complex environments, the promise of continuous testing and automated defense is compelling. However, that promise should be rooted in reality.
As a CISO, I’ve had the opportunity to observe how AI is being adopted across offensive and defensive practices. What I’ve seen reinforces a clear truth: AI amplifies capability, but it does not replace human judgment. Understanding where it adds value, and where it fails, is essential to building resilient security programs that actually reduce risk.
Where AI Brings Tangible Benefits
AI’s most meaningful contributions in security come from its ability to scale human effort, not eliminate it. The human element is invaluable and isn’t going away. Here are a few of the areas where AI delivers real impact:
- Faster Reconnaissance and Data Correlation:
AI models excel at ingesting large datasets, correlating signals, and highlighting patterns that might take humans far longer to see. In environments with sprawling cloud resources, microservices, and hybrid infrastructure, this helps narrow the field quickly.
- Assisting with Triage and Prioritization:
Security teams consistently struggle with alert volume, whether from SIEMs, scanners, or bug bounties. AI can help separate likely signal from noise, enabling teams to focus precious human cycles on high-impact areas (a minimal sketch of this follows the list).
- Augmenting Reporting and Developer Handoff:
Generating clear, actionable findings is often as challenging as discovering them. AI can help synthesize technical output into structured observations that are easier for developers to consume and act upon.
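To make the triage point concrete, here is a minimal sketch of what model-assisted pre-triage can look like in practice. It is illustrative, not a reference implementation: ask_model is a placeholder for whichever LLM provider you use, the label taxonomy is arbitrary, and the model never closes anything; it only reorders the queue a human will still work through.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    asset: str     # e.g. "payments-api (prod)"
    details: str   # raw scanner output or bug bounty submission text

def ask_model(prompt: str) -> str:
    """Placeholder: swap in a call to your LLM provider's SDK here."""
    return "LIKELY_SIGNAL"  # canned reply so the sketch runs standalone

def pre_triage(finding: Finding) -> str:
    """Ask the model for a coarse label; anything unexpected goes straight to a human."""
    prompt = (
        "Label this security finding LIKELY_SIGNAL or LIKELY_NOISE.\n"
        f"Asset: {finding.asset}\nTitle: {finding.title}\nDetails: {finding.details}"
    )
    label = ask_model(prompt).strip().upper()
    return label if label in {"LIKELY_SIGNAL", "LIKELY_NOISE"} else "NEEDS_HUMAN_REVIEW"

findings = [
    Finding("IDOR on invoice endpoint", "payments-api (prod)",
            "GET /invoices/{id} returns another tenant's data"),
    Finding("Missing cache headers", "marketing-site",
            "Static asset served without Cache-Control"),
]

# The label only changes ordering; every finding still reaches an analyst.
for finding in sorted(findings, key=lambda f: pre_triage(f) == "LIKELY_NOISE"):
    print(pre_triage(finding), "-", finding.title)
```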
These contributions aren’t theoretical. They’ve changed how many security teams operate. But it’s important to understand their scope and their limits if we’re to use them effectively.
Where AI Still Falls Short, and Why It Matters
Despite progress, AI-led automation has clear limitations that matter in practice:
- Weakness in Understanding Context:
Models are extremely good at echoing patterns they’ve seen, but they don’t understand the architecture, logic flows, or business rules that define real systems. This can mean the difference between surface noise and true exploitation paths.
- High False Positive Rates in Complex Scenarios:
In application logic and access control testing, AI still struggles to distinguish between benign conditions and genuine exploit chains. This creates a heavy overhead for analysts and can overwhelm triage teams rather than assist them.
- Scalability Without Insight Is Risky:
AI can surface large volumes of potential issues quickly, but without human calibration, that scale becomes a liability. A pentest that returns hundreds of low-impact flags is not the same as one that uncovers a critical chain of vulnerabilities.
- Blind Spots in Emerging Attack Surfaces:
Certain classes of risk, especially around AI-driven applications themselves, remain poorly served by current models. Issues like prompt manipulation, agentic workflows, and unpredictable model behavior often require creative thinking that goes beyond pattern matching.
These limitations aren’t minor inconveniences. They shape the strategic decisions security teams make about tooling, process, and investment.
AI: A Capability Multiplier — Not a Replacement
For modern security programs, the question is not “Should we use AI?” but rather:
How do we integrate AI in a way that enhances human expertise without introducing blind trust?
A clear pattern has emerged in teams that are succeeding:
- AI handles scale; humans validate context.
- AI alerts; humans determine impact.
- AI highlights patterns; humans judge exploitability.
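One way to read that division of labor is as a hard gate in the pipeline: the model can open a review item, but only a recorded human verdict can promote it to a ticket. The sketch below is schematic; the types and the open_ticket stand-in are assumptions, not any particular tool’s API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelFlag:
    summary: str
    model_rationale: str                    # what the model thinks it found
    analyst_verdict: Optional[str] = None   # "exploitable" / "benign"; set only by a human

def open_ticket(flag: ModelFlag) -> None:
    # Stand-in for your issue-tracker integration (Jira, GitHub Issues, etc.).
    print(f"TICKET OPENED: {flag.summary}")

def route(flags: list[ModelFlag]) -> None:
    for flag in flags:
        if flag.analyst_verdict is None:
            # The model alerted; impact is undetermined, so it waits for a human.
            print(f"REVIEW QUEUE: {flag.summary}")
        elif flag.analyst_verdict == "exploitable":
            open_ticket(flag)  # only human-validated findings become work items
        # "benign" verdicts are dropped, ideally with feedback to tune the model

route([
    ModelFlag("Privilege escalation via stale IAM role", "role still attached to prod Lambda"),
    ModelFlag("Open redirect on /logout", "redirect target not validated",
              analyst_verdict="exploitable"),
])
```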
The Human Advantage Still Matters Most
There are aspects of offensive security that AI will likely never automate fully:
- Understanding business logic and how workflows can be chained into real exploits.
- Contextual analysis, like whether an elevated privilege in a dev environment really exposes production data.
- Tactical creativity in threat emulation, where human intuition anticipates attacker behavior beyond textbook patterns.
Human experience is where strategic understanding and risk prioritization reside. This is why the most effective security teams still invest deeply in human talent alongside automation tools.
A Framework for AI Adoption in Security
For leaders considering or scaling AI use in their security programs, here are a few practical guidelines:
- Define Clear Roles for AI and Humans:
Before deploying any AI tool, articulate what it should automate and what requires human judgment.
- Evaluate Models Based on Data Quality:
AI systems trained on noisy or synthetic data tend to reflect that noise. Prefer models grounded in human-validated security data.
- Integrate AI Outputs into Existing Workflows:
AI should improve your workflows, not introduce parallel complexity. For example, integrate insights into issue tracking and developer triage processes.
- Measure What Matters:
Shift performance metrics from volume (e.g., bugs found) to impact (e.g., reduction in exploitable risk). A small sketch of this shift follows the list.
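To put a number on that last point, here is a small sketch of the shift from a volume metric to an impact metric over the same set of findings. The field names and the percentage are illustrative assumptions; the point is that “confirmed-exploitable findings actually remediated” tells a different story than a raw bug count.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    severity: str       # "critical" / "high" / "medium" / "low"
    exploitable: bool   # confirmed by a human analyst, not just flagged by a tool
    remediated: bool

findings = [
    Finding("critical", exploitable=True,  remediated=True),
    Finding("low",      exploitable=False, remediated=True),
    Finding("low",      exploitable=False, remediated=True),
    Finding("high",     exploitable=True,  remediated=False),
]

# Volume metric: easy to report, easy to game, weakly tied to risk.
bugs_found = len(findings)

# Impact metric: of the findings a human confirmed exploitable, how many are actually fixed?
exploitable = [f for f in findings if f.exploitable]
risk_reduction = sum(f.remediated for f in exploitable) / len(exploitable) if exploitable else 1.0

print(f"Bugs found: {bugs_found}")                           # 4
print(f"Exploitable risk remediated: {risk_reduction:.0%}")  # 50%
```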
Looking Ahead
AI will continue to reshape how security teams operate, but resilience will not come from automation alone.
It will come from thoughtful design: programs that pair machine scale with human judgment, that prioritize meaningful validation over raw volume, and that treat context as the most valuable signal of all.
The organizations that succeed won’t be the ones who adopt the most tools or chase the most hype. They’ll be the ones who invest in clarity around their attack surface, real risk, and where AI meaningfully supports their teams versus where it must be challenged.
AI is a powerful accelerator. But security remains, at its core, a human discipline. We shouldn't lose sight of that fact.
