A CISO from a multi-billion-dollar enterprise we recently spoke to made an observation that stuck with us. The security capability they valued most was not another detection tool or framework. It was speed.
For them, the speed of identifying issues, validating fixes, and responding to change was top of mind. With so many fires to fight, time itself is becoming an operational constraint.
Cybersecurity expert Phil Venables recently suggested that security programs will increasingly succeed or fail based on speed.
“Speed is all. Particularly, how defenders can run their OODA (observe, orient, decide, act) loop faster than attackers can adapt.”
This isn’t another “AI is causing this mess” article, though; let’s confront the broader reality. Modern engineering teams deploy changes continuously: in our research of 200 CISOs and 200 engineering leaders (including CTOs), 76% of organizations deploy significant updates weekly or faster, and 39% deploy daily.
But security validation doesn’t operate on that cadence; only 21% validate security on every release. This is more than a misalignment: it’s a structural failure.
Pace Layers
Venables points to Stewart Brand’s “Pace Layers” framework to explain how complex systems evolve.
“The fast layers bring novelty and experimentation, and the slow layers provide stability and memory.”
Software delivery is the fast layer, while governance and validation are the slow layer. Modern software environments are pushing these layers to drift apart even further. Engineering teams are cranking up the speed at which they operate, while security validation still operates on quarterly or annual cycles.
Security results arrive after the system has changed, so what’s the point?
The result of this mismatch is inevitable: 85% say security findings are outdated by the time reports arrive at least sometimes, and nearly half (48%) of those say this happens very often or all the time.
In other words, by the time test findings are reviewed, the system they describe has already changed. In effect, teams are making security decisions based on a past state of the system. Even small changes can shift a system’s security posture. A new endpoint, a tweak to authorization logic, or an updated dependency can open up entirely new attack paths. Pentesting is often validating a version of the system that no longer exists.
As a result, suggested fixes and retesting may still root out older security issues, but because the software has moved on, teams begin to lose trust in the signal the test provides, because that signal is late.
The result is a bit like the layered time mechanics in Inception: different parts of the system now operate at very different speeds, and actions taken too late in one layer may have little effect on what is already unfolding in another.

Speed shouldn’t come at the expense of depth
For organizations seeking an improved security posture, validation needs reasoning, exploration, exploit confirmation, and workflow analysis. It needs to be thorough (but you already knew that).
“Static analysis can only go so far. If you can’t validate the issue against the running application, it’s only a hypothesis,” says Philippe Dourrasov, AI pentest lead at Aikido Security.
This is why meaningful security testing has historically taken time.
And so, the challenge is not to increase the number of tests, but to preserve the depth of real offensive testing while operating at a much higher tempo.
However, the depth limitations of periodic testing are clear. In our research, 51% of respondents believe deeper vulnerabilities, such as logic flaws, broken access controls, or multi-step attack paths, are missed always or often.
This isn’t because security teams lack expertise. It’s because pentesters and red teams are up against a set of hard constraints: time-boxed engagements, expanding attack surfaces, and increasingly complex, interconnected systems. At its core, the breadth-versus-depth dilemma is really a problem of time.
Validation must follow meaningful change
Organizations aren’t blind to the growing mismatch between delivery speed and security validation. Many have tried to close the gap by running more pentests throughout the year, scanning continuously, or compressing manual engagements to fit tighter release cycles. In theory, these approaches increase speed. In practice, they often just shift the compromise somewhere else.
The better shift is aligning validation with how software actually changes. That doesn’t mean testing everything constantly; it means enabling validation to respond when meaningful change happens. In other words, validating the changes that actually introduce risk. These changes aren’t merely cosmetic, but things like new API endpoints, changes to authorization logic, agent workflows, or new third-party integrations.
The challenge is that most existing testing approaches can’t operate at this level of responsiveness. Traditional pentests take days or weeks to schedule, execute, and report. Even the fastest engagements can’t realistically keep up with changes happening daily or multiple times a day.
What this means operationally for security teams is that validation must become part of the delivery pipeline, not a bolted-on event. In turn, this shifts the focus from testing entire systems periodically to validating systems incrementally.
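As a rough sketch of what incremental, change-triggered validation could look like, the snippet below classifies a deploy's changed files against path patterns that plausibly alter attack surface. The patterns, paths, and risk rules here are illustrative assumptions, not a real product API or the approach described in the research.

```python
# Hypothetical sketch: decide whether a deploy warrants targeted re-validation.
# Path patterns are assumptions; adjust them to the codebase in question.
from fnmatch import fnmatch

# Changes that plausibly alter a system's security posture
RISK_PATTERNS = [
    "*/routes/*",        # new or changed API endpoints
    "*/auth/*",          # authorization / authentication logic
    "requirements.txt",  # third-party dependency updates
    "*/integrations/*",  # external service integrations
]

def risky_changes(changed_files):
    """Return the subset of changed files matching a risk pattern."""
    return [f for f in changed_files
            if any(fnmatch(f, p) for p in RISK_PATTERNS)]

def should_validate(changed_files):
    """Trigger validation only when the diff touches risky areas."""
    return len(risky_changes(changed_files)) > 0

diff = ["docs/README.md", "app/auth/permissions.py"]
print(should_validate(diff))  # True: authorization logic changed
```

A gate like this would sit in the delivery pipeline itself, so a documentation-only deploy passes straight through while a change to authorization logic kicks off a focused test of just that surface.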
{{cta}}
The need for speed
For decades, organizations have focused on increasing output: more code, more features, more applications. Those teams that ship faster gain a competitive advantage. Venables argues that the same principle increasingly applies to security.
To truly benefit from the speed of modern software development, security must move at the same pace. Organizations that can detect issues, validate fixes, and respond to change faster gain the advantage, not just over attackers, but over competitors building in the same space. Those that can’t will increasingly find themselves operating on outdated assumptions about their own systems.
When different layers of a system move at different speeds, security that reacts too late is already operating in the wrong time frame. To return to the Inception analogy: actions taken too late in one layer have little effect on what is already unfolding in another.

