What Your WAF Misses: Credential Stuffing

Impart Security
February 18, 2026
3 min read

OAT-008, Credential Stuffing, is the mass automated testing of stolen username and password pairs against login endpoints. The credentials aren't guessed. They're sourced from breaches, purchased from criminal marketplaces, or pulled from public dumps, then tested at scale to find accounts where users reused the same password. When a match hits, the attacker has a valid session.

How the attack moves through the stack

The attacker starts with a credential list: breached email/password pairs, typically millions of rows. The list is loaded into a stuffing tool (Sentry MBA, OpenBullet, custom scripts) and distributed across a proxy pool. Residential proxies are standard. Each IP sends a small number of requests, well under any per-IP rate limit.

Each request is a POST to your login endpoint with a valid-looking payload. Correct content type, realistic user agent, proper headers. The credentials themselves are real. They're just from a different breach. A single request is indistinguishable from a legitimate failed login.

The tell is in the aggregate. Hundreds of login failures across unrelated accounts, concentrated in a short time window, from IPs with no prior session history, with no upstream page navigation before the POST. That cross-request pattern is where detection lives.
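That cross-request pattern can be made concrete. The sketch below is illustrative, not any vendor's implementation: it tracks failed logins in a sliding time window and flags a burst spread across many *distinct* accounts, which is exactly the aggregate signal a single-request check cannot see. The class name and thresholds are assumptions chosen for the example.

```python
import time
from collections import deque

class CrossAccountFailureDetector:
    """Flags many failed logins across unrelated accounts in a short window."""

    def __init__(self, window_seconds=300, account_threshold=50):
        self.window = window_seconds
        self.threshold = account_threshold
        self.failures = deque()  # (timestamp, account_id)

    def record_failure(self, account_id, now=None):
        now = time.time() if now is None else now
        self.failures.append((now, account_id))
        # Drop events that have aged out of the sliding window.
        while self.failures and now - self.failures[0][0] > self.window:
            self.failures.popleft()

    def looks_like_stuffing(self):
        # The tell: failure count per account stays low, but the number
        # of distinct accounts failing in the window is abnormally high.
        distinct_accounts = {acct for _, acct in self.failures}
        return len(distinct_accounts) >= self.threshold

detector = CrossAccountFailureDetector(window_seconds=300, account_threshold=50)
t0 = 1_000_000.0
for i in range(60):  # 60 failures in 60 seconds, each against a different account
    detector.record_failure(f"user{i}@example.com", now=t0 + i)
print(detector.looks_like_stuffing())  # True
```

Note that no single event in the window is suspicious on its own; the signal only exists once failures are correlated across accounts.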

[Figure: flowchart of how the attack moves through the stack]

What happens when the alert fires

If you're running a bot vendor alongside your WAF, your detection tooling probably flags this activity. Credential velocity across accounts. Session anomalies. IP rotation patterns. The signal is real.

The problem starts when you try to do something with it.

The bot vendor writes to a dashboard. Your team sees the alert. Anomalous login activity, high confidence bot, thousands of attempts across hundreds of accounts. Everyone agrees it's real. Now someone needs to turn that into enforcement.

The only enforcement point most teams have is the WAF. So someone writes a rule. Maybe a rate limit on the login path. Maybe a geo-block on the top source countries in the campaign. Maybe a header filter targeting the most common user agent.

Every one of these is a rough approximation of what the bot vendor actually detected. The bot vendor saw credential velocity across 1,200 unrelated accounts from IPs with no session history. That's not a rule you can write in a WAF. The WAF doesn't have the session data. It doesn't correlate across accounts. It evaluates each request on its own.

So the rate limit catches legitimate users who mistype their password twice. The geo-block locks out real customers. The header filter works until the attacker rotates user agents, which takes minutes.
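The arithmetic behind that evasion is worth spelling out. Using the article's own numbers (1,200 attempts, a large residential proxy pool) and an assumed, typical per-IP limit of 10 requests per minute, a quick simulation shows the WAF rule never fires:

```python
from collections import Counter

PER_IP_LIMIT = 10   # assumed: a typical per-IP requests/min limit on the login path
ATTEMPTS = 1200     # from the article's example campaign
PROXY_POOL = 600    # assumed pool size; residential proxies are cheap at this scale

# Round-robin the attempts across the pool, as stuffing tools do.
requests_per_ip = Counter()
for i in range(ATTEMPTS):
    requests_per_ip[f"ip-{i % PROXY_POOL}"] += 1

busiest_ip = max(requests_per_ip.values())
blocked_ips = sum(1 for c in requests_per_ip.values() if c > PER_IP_LIMIT)

print(busiest_ip)    # 2  -- each IP sends only two requests
print(blocked_ips)   # 0  -- no IP ever crosses the limit
```

Every IP stays a factor of five under the limit, so the rule blocks nothing from the campaign while still tripping on a real user who fat-fingers a password a few times from one address.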

Most teams end up in the same place. The rule is either too broad and causes false positives, or too narrow and gets evaded immediately. The third option, which is what usually happens, is that the team leaves it in monitor mode and revisits next quarter. The detection was correct the entire time. The attack completes anyway.

[Figure: status quo path for bot protection, using bot vendors to detect and WAFs to block]

What we see when detection and enforcement share a request path

When we moved detection and enforcement into the same layer, inside the application's request path, the operational picture changed.

[Figure: chart comparing a WAF to protection as code]

At this layer, you have access to the signals that actually reveal credential stuffing. You can correlate login attempts across accounts and track velocity in real time. You can observe whether a request came from a session that navigated to the login page or hit the POST endpoint directly with no prior interaction. You can track behavioral fingerprints across time, not just per-request headers. You can see what happens after authentication, whether a successful login immediately accesses PII, changes recovery settings, or extracts payment methods.

These signals are evaluated while the request is being processed. The system that builds the behavioral picture is the same system that makes the enforcement decision. There's no alert that needs to be translated into a rule. There's no handoff between tools.

Before any of this goes live, shadow mode runs against real production traffic. Every request is evaluated, every decision is logged, nothing is blocked. You see exactly what would be enforced, with the full evidence chain: which signals fired, what thresholds were crossed, which sessions matched. When the evidence matches what you expect, you promote the policy to active enforcement.
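A minimal sketch of what that looks like in code, under stated assumptions: the `Policy` class, its fields, and the evidence format are hypothetical illustrations, not Impart's actual API. The point is structural: detection and enforcement are one function, and shadow mode is just a flag that downgrades the verdict to a log line while preserving the full evidence chain.

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")

class Policy:
    """Hypothetical in-path policy: detects and decides in the same step."""

    def __init__(self, shadow=True, account_threshold=50):
        self.shadow = shadow                      # True: log decisions, block nothing
        self.account_threshold = account_threshold

    def evaluate(self, distinct_failed_accounts):
        verdict = ("block" if distinct_failed_accounts >= self.account_threshold
                   else "allow")
        evidence = {
            "signal": "cross_account_failure_velocity",
            "distinct_accounts": distinct_failed_accounts,
            "threshold": self.account_threshold,
            "verdict": verdict,
            # Enforcement only happens once the policy is promoted out of shadow.
            "enforced": verdict == "block" and not self.shadow,
        }
        logging.info(json.dumps(evidence))  # every decision logged with its evidence
        return evidence

policy = Policy(shadow=True)
decision = policy.evaluate(distinct_failed_accounts=1200)
print(decision["verdict"], decision["enforced"])  # block False
```

Promoting the policy is a one-line change (`shadow=False`) to code that has already been watched against production traffic, which is why the handoff problem from the previous section disappears.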

The policy itself lives in version control and deploys through CI/CD. You can scope it to a single endpoint, test it in shadow mode, review the decisions, and roll back in seconds if something looks wrong. Writing a credential stuffing detection policy looks like writing application logic, because it is.

[Figure: flowchart of detection and enforcement in the same platform]

The difference isn't better detection. Most bot vendors already detect credential stuffing well. The difference is that the detection and the enforcement decision happen in the same place, so the signal doesn't get lost crossing a tool boundary.

See what you would block before you block it.

Impart closes the enforcement gap for credential stuffing and all 21 OWASP automated threats. Shadow mode against live traffic. Full decision evidence. Enforcement when you're ready.

OWASP Deep Dive Series
This post is part of a 10-part series examining how OWASP automated threats expose the gap between detection and enforcement, and what changes when both move into the application's request path.

Speak to an Impart Co-Founder to learn more about WAF and API Security!
