Aikido
Report

2026 State of AI in Security & Development

This report captures the voices of 450 security leaders (CISOs or equivalent), developers, and AppSec engineers across Europe and the US. Together, they reveal how AI in cybersecurity and software development is already breaking things, how tool sprawl is making security worse, and how developer experience is directly tied to incident rates. The report combines quantitative data with perspectives from senior practitioners responsible for real-world security outcomes.

Key Findings

  • 69% of organizations found vulnerabilities introduced by AI-generated code

  • 1 in 5 experienced a serious security incident linked to it

  • Teams using more security tools were more likely to suffer incidents

  • Only 21% believe AI will ever write secure code without human oversight

Summary

AI now writes a significant share of production code, but security practices have not kept pace. Incidents are common, false positives are widespread, and tool sprawl is slowing remediation.

The data is paired with expert commentary from security and engineering leaders, including CISOs, CTOs, and heads of engineering, to explain what the numbers mean in practice.

Contributors include leaders from BP, Lovable, the UK Cabinet Office, PSG and Serko.

What you’ll learn

How AI is changing real security risk, why tool sprawl increases incidents, and what leading teams do differently to reduce breaches without slowing development.

Written by:
Sooraj Shah

Sooraj Shah is Content Marketing Lead at Aikido Security. He has a background as a journalist for publications such as the BBC, the FT, Infosecurity Magazine and SC Magazine, and as a content marketer for B2B tech companies and start-ups.


Based on research with 450 security leaders (CISOs or equivalent), developers, and AppSec engineers across Europe and the US.

AI is changing how software is built. It is also increasing security risk. Faster development, more tools, and AI-generated code are exposing weaknesses in how teams operate.

This report shows what security and engineering teams are dealing with in 2026.

Inside the report:

AI and real security incidents
How AI-generated code is already linked to incidents and where accountability sits when things go wrong.

Developer experience and risk
Why false positives and alert fatigue lead teams to bypass security controls.

Tool sprawl and fragile teams
How fragmented security tools and reliance on a few key engineers increase incident risk.

What works in practice
How stronger teams reduce noise and build security into developer workflows.

Includes an executive summary and deeper findings for security and engineering leaders.

Built by Aikido Security.
