
The Scale Problem: Why 5,000 Pentests Is Really a Story About Velocity

At Cobalt, we recently passed the 5,000 mark for pentests conducted in the last year. On its own, that number sounds like a milestone. The more interesting, and more important, story is why a number like that is starting to make sense for so many modern organizations.

The reality is that modern software doesn’t move slowly, ship quarterly, stabilize between releases, or wait for security teams to catch up. It is constantly in motion.

Code ships daily, sometimes hourly, and infrastructure evolves continuously. New SaaS tools appear across teams without centralized oversight. APIs connect everything to everything. AI features are launched before the risks are fully understood. At the same time, connected devices, IoT systems, and embedded technologies are expanding the digital footprint of organizations far beyond traditional IT environments.

And yet, many security programs are still structured around traditional models designed for an entirely different era: slower release cycles, simpler architectures, and relatively static environments. That methodology made sense for its time, but it doesn’t work in the world we operate in today.

The Velocity Gap Is Now the Core Security Problem

If you test once a year, you’re effectively making decisions about 12 months of risk based on a few weeks of visibility. That might have been acceptable when environments changed slowly. But in modern organizations, 12 months is an eternity, and a lot can happen in a year.

In that time, teams will:

  • Launch new customer-facing applications
  • Add new APIs and integrations
  • Migrate workloads between cloud providers
  • Introduce new vendors into the stack
  • Experiment with AI copilots and automation tools
  • Refactor core architecture
  • Expand into new regions with new compliance requirements

Each of those changes alters the attack surface, sometimes subtly and sometimes significantly. The result is a growing blind spot between how fast risk evolves and how often it’s actually validated. That blind spot is where uncertainty creeps in, and where security leaders lose confidence in their own posture, even when the metrics look positive.

Dashboards say “green.” Reports say “passed.” Audits say “compliant.” Those signals are important and meaningful, but they don’t automatically make you secure.

This is the velocity gap: when your environment moves faster than your ability to meaningfully understand its risk.

What Actually Changes From Annual to Continuous Pentesting

When organizations begin testing continuously instead of periodically, the transformation isn’t just technical, but cultural. Security stops being an annual disruption and starts becoming part of normal operations.

Instead of:

  • Large backlogs of findings
  • Overwhelming remediation efforts
  • Frustrated engineering teams
  • Late-stage surprises

You start to see:

  • Smaller, more actionable findings
  • Faster turnaround on fixes
  • Earlier collaboration between security and engineering
  • Higher trust across teams

Remediation becomes incremental instead of overwhelming. Engineers stop seeing pentesting as an audit and start seeing it as feedback. Security leaders stop preparing for the “pentest season” and start building sustainable programs that deliver real business value.

Across thousands of engagements, the strongest programs aren’t the ones testing most frequently for the sake of frequency. They’re the ones that have aligned validation with how their business operates and what it actually needs to protect.

What “Normal” Looks Like in 2026

There’s no single ideal cadence for every organization. But there is a clear directional shift in high-performing programs.

“Normal” no longer looks like:

  • One big test per year
  • A final PDF delivered weeks after testing ends
  • Retesting treated as optional
  • Findings discussed long after context is lost

Instead, modern programs increasingly look like:

  • Testing triggered by meaningful change, not calendar dates
  • Validation embedded into release cycles
  • Retesting treated as expected hygiene
  • Findings discussed in real time, while context still exists
  • Security feedback happening while software is still being built

Some organizations test monthly, others test per major release. Some test continuously on critical assets. Most mature programs use a mix of all three.
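To make “testing triggered by meaningful change, not calendar dates” a little more concrete, here is a minimal, purely illustrative sketch. The class, fields, and thresholds below are assumptions for the example, not a Cobalt product or API; the point is simply that the trigger is the nature of the change, not the date on the calendar.

```python
# Hypothetical sketch: flag a release for security validation based on what
# changed, rather than on a fixed annual schedule. All names and criteria here
# are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ReleaseChange:
    new_public_endpoints: int = 0        # newly exposed, externally reachable APIs
    auth_or_session_changes: bool = False  # changes to login, tokens, or sessions
    new_third_party_integrations: int = 0  # new vendors or SaaS connections
    infra_migration: bool = False          # workloads moved between providers


def needs_pentest(change: ReleaseChange) -> bool:
    """Return True when a release meaningfully alters the attack surface."""
    return (
        change.new_public_endpoints > 0
        or change.auth_or_session_changes
        or change.new_third_party_integrations > 0
        or change.infra_migration
    )


if __name__ == "__main__":
    release = ReleaseChange(new_public_endpoints=2, auth_or_session_changes=True)
    if needs_pentest(release):
        print("Attack surface changed: schedule validation for this release.")
    else:
        print("No meaningful attack-surface change in this release.")
```

In practice the signals would come from your own release tooling and risk criteria; the sketch just shows the shift from calendar-driven to change-driven validation.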

In those environments, the fact that Cobalt runs thousands of pentests a year across our customer base isn’t surprising. It’s the direct result of the complexity and pace of the systems being secured.

What 5,000 Pentests Actually Represents

So when we talk about performing more than 5,000 pentests in a year, the number itself isn’t the end goal, but a signal.

A signal that organizations are no longer satisfied with annual snapshots. A signal that security validation is becoming continuous rather than episodic. A signal that offensive security is being embedded into how modern software is built, not bolted on afterward.

It reflects a structural shift in how serious security programs now operate. The number isn’t impressive because it’s large. It’s meaningful because it tells a story about how the best teams are adapting to velocity.
