Why Security Must Evolve for the AI Era
Software development is undergoing a rapid, structural change. The culprit? “Vibe coding.”
If you’re not yet familiar with the term, “vibe coding” is the process of building software using AI assistants, in particular Large Language Models (LLMs) like Cursor, Claude, and Replit. Rather than writing code manually (which often includes the reuse of open source packages) developers are now turning to AI-based tools to generate net-new code. The premise is that vibe coding saves hours, maybe even weeks, capitalizing on the desired velocity for software development while using AI’s “intelligence” to write code that is solid and reliable.
However, while speed gains are clear, a few concerning issues are starting to arise. Notably, while developers are beginning to feel confident in the technology’s ability to build software, the usual security checks and balances aren’t always present. Why? Vibe coding generates large chunks of software in one fell swoop, allowing developers to bypass the typical vulnerability scans and human reviews, the absence of which drastically increases the risk that a vulnerability will slip through early stages of the build process and land in production, where large-scale damage could occur.
Vibe coding is here to stay, but it is creating a crisis Application Security (AppSec) teams and Site Reliability Engineers (SREs): The old AppSec model is inadequate in today’s development process. The traditional AppSec model cannot keep pace; it is breaking under the weight of AI-driven velocity.
In this blog, the first in a series of three, we will explain how this crisis is defined by three non-negotiable forces introduced by AI and how these forces can be overcome with innovative thinking and new security strategies.
The Velocity Crisis: Days vs. Minutes
Cybersecurity has long been hampered by the disparity between the speed of the attacker and that of the defender. This chasm is only growing larger with the mass adoption of AI.
On one side, the adversary has weaponized velocity. Attackers use the same LLMs driving feature generation to automate reconnaissance and exploit scripting. Security research has already demonstrated the ease with which a sophisticated exploit can be generated and executed in mere minutes. AI-assisted code generation allows an attacker (even a less sophisticated one) to submit natural language code prompts to the LLM and create a functional proof-of-concept exploit in under 23 minutes.
On the flip side, defenders are hampered by the time it takes to accurately identify, thoroughly investigate, and appropriately respond to issues. For large enterprises using traditional AppSec tools, scanning, alone, could take many hours up to several days. Once issues have been surfaced, the security team then needs time to triage what is normally an overwhelming number of vulnerabilities, test patches, and prioritize fixes.
For organizations that integrate legacy Web Application Firewalls (WAFs) into the vulnerability management process, the average time to draft, test, and safely deploy a new rule to mitigate a sophisticated, zero-day threat is 18 days. Talk about a running head start for attackers.
Businesses cannot fight minutes with days, let alone days numbering in the double digits. The security function ceases to be relevant when the attacker’s time-to-exploit is executed before the defender’s time-to-mitigate even fully begins.
Given these parameters, legacy tools’ focus on "visibility" (aka filling dashboards with alerts) is not sufficient; it’s a solved problem that LLMs were able to democratize overnight. In this new paradigm, true security value and the hard technical challenge have shifted to the enforcement layer, where the goal is to securely stop a threat at runtime, without massive disruption to operations.
Attack Surface Expansion and the Vibe Attacker
Vibe coding is defined by its ease of use. Some also might say that it relieves humans of critical thinking. Suppose a user of AI chooses to outsource the entirety of a task, in line with the subject of this blog, developing software, to an LLM, tacitly accepting AI-generated code and skipping necessary aspects of review. In that case, the result is clear and rapid attack surface expansion. Because it’s AI, and speed is the central factor, the attack surface grows at such a rate that traditional tools and methods cannot compete.
Not only are the code components more risky than before, but anything else used in the generation and/or deployment and use of software becomes part of the attack surface. One of the areas of greatest concern is API security due to API sprawl. Whereas with traditional development, a human might have carefully scoped a few endpoints, an AI-assisted coder can inadvertently expose dozens of unreviewed microservices and APIs in hours. Every one of these new, unvetted endpoints becomes a target.
This creates fertile ground for “vibe attackers,” who leverage AI to automate exploit development and who are counting on developers to use AI to automate software development. These threat actors are also betting that vibe code is being written with minimal review, which essentially guarantees that vulnerabilities are more rampant and harder to triage than ever before.
Semantic Attacks Dominate Legacy Defenses
The AI-driven application ecosystem introduces an entirely new threat category that fundamentally invalidates signature-based security: semantic attacks.
The most prominent example is “promptware,” a form of prompt injection in which the attacker attempts to subvert an LLM application’s core purpose or extract sensitive system information. In contrast to typical prompt injection, promptware targets the application’s underlying data set to change the meaning and context of the input rather than inserting a malicious string.
The challenge is technical and architectural. Traditional WAFs rely on signatures and brittle regex rules to identify known patterns. These defenses are now rendered useless against promptware, which constantly shifts its structure to avoid simple, pattern-based matching. Defenders cannot protect a contextual, highly variable attack with a static rule set.
To outwit the system, AppSec must evolve to AI-native defenses that analyze intent over syntax. This structural shift requires new detection methods, including:
- Token-based Query Detection: Analyze prompts at the token level, where LLM input is broken down into constituent units to identify malicious intent. This provides high-accuracy detection and explicitly avoids brittle regex rules.
- Attack Embeddings Analysis (AEA): Detect prompt injection and sensitive data leakage by analyzing the semantic meaning and context of LLM application queries, making it far more resilient than pattern matching.
The next two articles in this series will dig deeper into these and additional methods for building AI-resilient application defenses.
In the meantime, the innovation crisis is clear: The legacy security model, defined by its slowness, overwhelming alert fatigue, insufficiency against sprawl, and inability to handle semantic attacks, is fundamentally incompatible with the AI-driven world. AppSec must transform into an engineering discipline driven by automation that can build defenses to match the speed and complexity of modern application development.
