WHITEPAPER
The Offensive Security Blueprint: A Guide to Building a Modern, Strategic Program
WHITEPAPER
The Offensive Security Blueprint: A Guide to Building a Modern, Strategic Program

Velocity vs. Vulnerability: Why AI-Generated Code Demands Human-Led Security

The appeal of AI coding assistants is undeniable. For engineering leaders and developers, these tools promise a future of unprecedented velocity and efficiency, a world where tedious, low-level coding errors are a thing of the past. It’s a clear win for any team focused on building and shipping faster.

But a compelling new wave of research presents a wake-up call. Recent findings from Apiiro reveal a dangerous pitfall of productivity gains from AI: while development speeds up and simple flaws decrease, there's an exponential increase in more complex, severe vulnerabilities. This isn't just a coding problem—it's a fundamental business risk and a software supply chain crisis in the making.

This trade-off—of solving simple problems while creating more sophisticated ones—demands that we critically re-evaluate not only how we build software, but how we secure it. It forces us to ask a crucial question: If AI is creating these nuanced flaws, can we really trust AI alone to find and fix them?

AI Velocity Creates Digital Supply Chain Risk

The research from Apiiro details how accelerated development is leading to a 10x prevalence of high-impact vulnerabilities that are harder for automated tools to detect. We're talking about the types of vulnerabilities, often buried in business logic or requiring chained exploits, that elude simple scans and lead to major breaches. As security leaders, we cannot afford to overlook this business risk. 

This magnifies a threat that already keeps CISOs up at night: the digital supply chain. With two-thirds of security leaders already worried about risks from third-party software, according to our research, the widespread adoption of AI coding assistants means organizations are systematically introducing a new class of risk into their products—a risk they pass down the chain, indefinitely. 

A more "intelligent" supply chain, built with AI, requires an entirely new level of scrutiny of this rapidly accumulating security debt.

Cobalt Research Confirms Growing AI Security Blind Spot

Apiiro's findings on code generation align perfectly with what our extensive pentesting data at Cobalt reveals about the insecurity of AI systems themselves. The Cobalt State of Pentesting Report 2025 shows that AI applications and LLMs present a disproportionate risk to the enterprise. In fact, they have the highest rate of serious vulnerabilities of any asset type we test, with 32% of all findings classified as high-risk.

But the data on fixing these flaws reveals a deceptive and dangerous reality about the speed and effectiveness of organizations as they set about fixing these vulnerabilities.

  • The bad news: Only 21% of serious AI and LLM vulnerabilities are ever fixed—the lowest remediation rate of any category we track. This is creating a massive and persistent security blind spot in the applications organizations rely on.

  • The "good" news: The few serious flaws that are fixed have the fastest median time to resolve (MTTR) of any asset type.

This isn't a success story about speed. It tells us that teams are quickly addressing the low-hanging fruit, while the vast majority of complex vulnerabilities—nearly 80%—are being left unresolved. Often, it’s because the flaw lies within a third-party AI model, leaving the organization with a risk it cannot control. This is the digital supply chain risk in action.

And so a flood of AI-related security debt is pouring into a system that is already overwhelmed. Across all our pentests in 2024, the MTTR for serious findings was 37 days. While that's an improvement from past years, it's still more than double the typical two-week SLA that three-quarters of organizations have in place. Development and security teams are already drowning.

The flawed logic of simply fighting AI with AI misses the point entirely. Adding AI-powered scanners to find flaws faster doesn't address the core bottleneck of remediation; rather it adds more noise to an overloaded system. The challenge isn't just finding vulnerabilities faster—it's providing the actionable context needed to fix them before adversaries can exploit them.

The AI Security Fallacy: That It Can Fix Its Own Mistakes

It's important to distinguish between AI security (the discipline of securing the AI models themselves) and securing with AI (the hyped-up solution of using AI for defense).

The critical flaw in relying on AI scanners is that they are optimized to find the very low-hanging fruit and known patterns that AI coding assistants are already helping developers eliminate.

The new vulnerabilities being introduced by AI—subtle business logic flaws, indirect prompt injections, and complex access control issues—are of a different class. They require creativity, contextual business understanding, and an adversarial mindset to uncover. These issues are precisely where automated tools are weakest and human experts are strongest. 

Automated scanners are essential to the pentesting process, and human pentesters work best when leveraging these tools. Yet relying solely on an AI scanner to secure AI-generated code creates a dangerous blind spot.

The Path Forward: Human Expertise, Amplified by AI

This brings us to the core of our philosophy at Cobalt. While others look to replace people with scanners, Cobalt uses AI to empower our pentesters, in order to deliver the creative, high-impact findings that only a human can uncover.

We call this approach human-led, AI-powered pentesting.

Here’s how it works:

  • AI for efficiency: Our platform uses AI to perform the work that it does best, for efficiency, scale, and analytics. This includes AI-powered scoping,, AI analysis of prior findings with suggested actions, AI assisted report writing, and AI-driven benchmarks and insights from our unmatched 5,000 annual pentests.
  • Humans for impact: This frees our global community of over 450 pentesters to do what they do best: think like an attacker, dive deep into business logic, and find the sophisticated flaws that automated tools will always miss.

The effectiveness of our AI is built on a powerful data moat. It is trained on the world's largest proprietary dataset of offensive security findings, from thousands of real-world pentests. With every subsequent test, both our human experts and our AI assistant get smarter.

Recommendations for Security Leaders

The rush to innovate is creating a new class of risk that cannot be managed by AI alone. A proactive, human-centric approach to security is not just recommended—it is essential.

Here are four actions you can take today:

  1. Treat AI assistants as a supply chain risk. Mandate secure development training for any team using these tools, and revise your vendor risk assessments to explicitly cover their use of genAI.
  2. Pressure-test your pentesting program. If your program relies heavily on automated scanning, you are likely missing this new wave of AI-induced vulnerabilities. You must supplement automated scans using creative, human-led testing.
  3. Invest in human expertise, not just automation. The future of security isn't about AI replacing people; it's about AI empowering the right people. Prioritize security partners who combine deep human expertise with smart, efficient technology.
  4. Shift from finding flaws to reducing risk. Focus on the business impact of vulnerabilities, not just the raw count. An AI-powered and human-led pentest provides the critical context needed for this risk-based approach, helping you prioritize what truly matters to your organization.

Learn more about how organizations are responding to the latent challenges of third-party code—download the CISO Perspectives Report on AI and Digital Supply Chain Risks.

CISO Perspectives Report 2025

Back to Blog
About Gunter Ollmann
Gunter Ollmann serves as Cobalt's Chief Technology Officer (CTO). With rich and diverse experience in cybersecurity innovation, Ollmann leads Cobalt's technology and services strategy, delivering AI-enabled offensive security solutions coupled with unmatched human ingenuity. More By Gunter Ollmann