WebValid
• WebValid Team

WebValid Catches the First Domino: How AI Breached McKinsey, BCG, and Bain

Tags: AI Coding · Security · JavaScript · Vibe Coding · API Keys

This article analyzes real-world security breaches at McKinsey, BCG, and Bain & Company disclosed by CodeWall in March–April 2026. The focus is on client-side vulnerabilities — hardcoded secrets in JavaScript bundles and missing security configurations — that automated scanning tools detect before production deployment.

An autonomous AI agent just breached all three members of the “Big Three” global consultancies. McKinsey. BCG. Bain. The world’s most prestigious advisory firms — brought down by hardcoded secrets baked into production JavaScript. Not by a nation-state APT group. Not by a zero-day exploit. By an AI agent that read their public JS bundles and found the keys right there in the source code.

In April 2026, security startup CodeWall published the final chapter of a three-part series documenting how their autonomous offensive agent compromised the AI platforms of all three MBB firms. The pattern was identical every time: publicly accessible API documentation, a credential or endpoint left unprotected, and a SQL injection that opened the entire database. The total damage across three engagements? Billions of rows of confidential data — exposed in minutes to hours.

The most preventable of these attack chains began with a client-side artifact that a standard bundle scanner would have flagged, stopping the 'first domino' before attackers could escalate. Other chains exploited missing authorization on the backend, but the most egregious entry point was sitting in plain sight: teams ship their front-door keys right in the JavaScript. If you deploy AI-generated frontend code without scanning the compiled output, you are exposing the very artifacts attackers use to start a breach.


The Attack Pattern: Three Firms, One Playbook

🔴 Critical · Full database compromise · OWASP A03:2021 Injection

All three breaches followed the same attack pattern. Understanding it is the first step toward not repeating it:

McKinsey — Lilli (March 2026) CodeWall’s agent mapped 200+ API endpoints from McKinsey’s publicly accessible API documentation. It found 22 endpoints that required zero authentication. Through one of these, the agent discovered a SQL injection vulnerability — JSON field names were concatenated directly into SQL queries without sanitization. Within two hours: full read-write access to the production database. The haul: 46.5 million chat messages, 728,000 confidential files, 57,000 user accounts, and 95 writable system prompts. Writable prompts meant the agent could silently alter how the AI responded to 40,000+ consultants — without deploying new code.

BCG — X Portal (March 2026) The agent documented 372 API endpoints on BCG’s X Portal. One endpoint accepted raw SQL without any authorization. Behind it: 3.17 trillion rows of data across 131 terabytes. No authentication. No rate limiting. No input validation.

Bain — Pyxis (April 2026) This one took 18 minutes. The agent downloaded the frontend JavaScript bundle and found a service account’s username and password hardcoded directly in the source code — likely a developer credential that slipped into the production build. Using these credentials, the agent authenticated to the platform and discovered a SQL injection in an API endpoint. The result: 159 billion rows of consumer transaction data, 2.5 billion rows of commercial intelligence, 9,989 conversations with the platform’s AI chatbot, and 36,869 JWT tokens with 365-day lifetimes and no multi-factor authentication.
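A 365-day token lifetime is visible to anyone who holds the token: a JWT's payload is just base64url-encoded JSON, readable without the signing key. A minimal Node sketch of the check (the token below is fabricated for illustration):

```javascript
// Decode a JWT payload (base64url JSON) and measure its lifetime.
// No signature verification needed — we only inspect claims the client already holds.
function jwtLifetimeDays(token) {
  const payloadB64 = token.split(".")[1];
  const json = Buffer.from(payloadB64, "base64url").toString("utf8");
  const { iat, exp } = JSON.parse(json);
  return (exp - iat) / 86400; // 86,400 seconds per day
}

// Hypothetical token: real header, fabricated payload with a one-year lifetime, dummy signature
const payload = Buffer.from(
  JSON.stringify({ sub: "user-1", iat: 1700000000, exp: 1700000000 + 365 * 86400 })
).toString("base64url");
const token = `eyJhbGciOiJIUzI1NiJ9.${payload}.sig`;

console.log(jwtLifetimeDays(token)); // → 365
```

Anything over a few hours for a bearer token, with no MFA behind it, is a standing invitation.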

The common denominator? Every breach started with something publicly visible — API docs, unauthenticated endpoints, or credentials sitting in a JavaScript file that any browser could download and read.


Bain’s Fatal Mistake: Credentials in the JavaScript Bundle

🔴 Critical · Credential exposure in client code · OWASP A07:2021 Identification and Authentication Failures

The Bain/Pyxis breach deserves special attention because the entry point was the simplest — and the most preventable. A service account’s login and password were hardcoded in the frontend JavaScript bundle. This is the exact same class of vulnerability that vibe-coders create daily when AI assistants “fix” broken API calls by adding the NEXT_PUBLIC_ prefix to secret environment variables.

The pattern is universal:

// ❌ What likely happened in the Pyxis codebase
// A developer or AI assistant hardcoded credentials during development
const PYXIS_SERVICE_ACCOUNT = {
  username: "svc-pyxis-prod",
  password: "internal-credential-here",
};

async function authenticate() {
  const response = await fetch("/api/auth/login", {
    method: "POST",
    body: JSON.stringify(PYXIS_SERVICE_ACCOUNT),
  });
  return response.json();
}

When this code runs through a bundler (Webpack, Vite, Turbopack), the credentials become string literals in the compiled .js file. Anyone can press F12, open the Sources tab, and extract them. No hacking required — the browser serves them for free.

// ✅ How credentials should be handled
// Server-side only — never expose to the client
"use server";

export async function authenticateService() {
  // Credentials stay on the server, never bundled into client JS
  const response = await fetch(process.env.PYXIS_API_URL + "/auth/login", {
    method: "POST",
    body: JSON.stringify({
      username: process.env.PYXIS_SERVICE_USER,
      password: process.env.PYXIS_SERVICE_PASS,
    }),
  });
  return response.json();
}

This is not a novel vulnerability. It is CWE-798, Use of Hard-coded Credentials, a failure class documented since the dawn of web security. But the speed of vibe-coding means developers skip the step where they verify what actually shipped in the compiled bundle. For a deep technical breakdown of how this happens in React and Next.js projects, see our guide: Leaked API Keys: How AI Compromises Developers During Code Generation.
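Verifying what shipped doesn't require heavy tooling to get started: even a regex pass over the compiled output catches the obvious offenders. A minimal sketch, with illustrative patterns (real scanners ship hundreds of rules; these three are far from exhaustive):

```javascript
// Scan compiled bundle text for common secret patterns.
// The pattern list is illustrative, not a complete ruleset.
const SECRET_PATTERNS = [
  { name: "Stripe secret key", re: /sk_live_[0-9a-zA-Z]{24,}/g },
  { name: "OpenAI API key", re: /sk-[A-Za-z0-9]{32,}/g },
  { name: "Hardcoded password field", re: /password['"]?\s*[:=]\s*['"][^'"]{6,}['"]/gi },
];

function scanBundleForSecrets(bundleText) {
  const findings = [];
  for (const { name, re } of SECRET_PATTERNS) {
    for (const match of bundleText.matchAll(re)) {
      findings.push({ name, snippet: match[0].slice(0, 40) });
    }
  }
  return findings;
}

// Example: a compiled chunk with an inlined credential (fabricated values)
const chunk = 'const a={username:"svc-pyxis-prod",password:"hunter2secret"};';
console.log(scanBundleForSecrets(chunk));
// → one finding: "Hardcoded password field"
```

Run something like this against every file in your `dist/` or `.next/static/` output, not against your source tree.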


Fact-Check: The Scale is Not Hypothetical

🔍 Public Case Study · Verified Disclosures · Responsible Disclosure Completed

Every number in this article comes from CodeWall's published research and verified public disclosures.


The Gap in the Pipeline: Why Git Scanners Miss Pre-Production Leaks

Bain’s security team wasn’t incompetent. McKinsey and BCG employ thousands of top-tier engineers. Yet all three missed the same vulnerability because they were looking at the wrong part of the pipeline.

The industry standard for security is repository scanning: SAST (Static Application Security Testing) and secret scanners such as GitGuardian or TruffleHog that comb your repository for leaked credentials. They are excellent at finding a password you accidentally committed to config.ts. But they have a fatal blind spot: build-time injection.

In modern web development (Next.js, Vite, Webpack), we use environment variables. In your source code, it looks like this:

const API_URL = process.env.NEXT_PUBLIC_API_URL;
const SERVICE_TOKEN = process.env.SERVICE_TOKEN;

A repository scanner sees this and gives it a pass. The secret isn’t in Git; it’s securely stored in a CI/CD vault (GitHub Secrets, Vercel Env, Jenkins). The code is clean.

But then the Build Step happens. The bundler takes your code and replaces those process.env references with their actual values from the environment. This is when the “clean” code becomes a “leaky” bundle. The resulting production JavaScript file contains the plain-text secret, but since this file is an ephemeral build artifact, it is never scanned by your repository-based tools.
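A toy model of that build step makes the blind spot concrete. Real bundlers do this via AST transforms (e.g. define/replace plugins); the naive string replacement below is only a sketch:

```javascript
// Toy model of build-time env inlining. Real bundlers operate on the AST,
// but the effect on the output is the same: literals replace the references.
function inlineEnv(source, env) {
  return source.replace(
    /process\.env\.([A-Z0-9_]+)/g,
    (_, name) => JSON.stringify(env[name])
  );
}

const source =
  "const API_URL = process.env.NEXT_PUBLIC_API_URL;\n" +
  "const SERVICE_TOKEN = process.env.SERVICE_TOKEN;";

const bundled = inlineEnv(source, {
  NEXT_PUBLIC_API_URL: "https://api.example.com",
  SERVICE_TOKEN: "super-secret-token", // lived only in the CI vault — until now
});

console.log(bundled);
// The "clean" source is now a leaky bundle:
//   const API_URL = "https://api.example.com";
//   const SERVICE_TOKEN = "super-secret-token";
```

The repository scanner signed off on the left side; the attacker downloads the right side.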

This is WebValid’s killer feature. We don’t scan your “clean” source code; we scan your “dirty” production bundle—the final output that is actually served to the user (and the attacker). We catch the leaks that only appear after the build pipeline has done its job.


Why This Matters for Every Project: From Enterprise Sprawl to Vibe-Coding Speed

The Class-A failure at Bain, McKinsey, and BCG wasn't caused by laziness; it was caused by Infrastructure Sprawl. In deep enterprise environments, secrets leak through more pipelines and services than any single team can track.

For the modern Vibe-Coder (indies, startups, and rapid-prototypes), the destination is the same, but the path is different. You aren’t suffering from enterprise sprawl; you’re suffering from AI-Assisted Speed.

AI coding assistants (Cursor, Copilot, ChatGPT) are context-blind. They optimize for the file you are editing, not your entire production architecture. When a fetch call fails because an environment variable is undefined, the AI happily suggests: “Just add the NEXT_PUBLIC_ prefix to make it accessible to the client.”

It works. You merge. You’ve just automated your way to a security breach.
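One cheap guardrail against this exact suggestion is a pre-merge check that refuses secret-looking names under the NEXT_PUBLIC_ prefix. A minimal sketch; the name heuristics are assumptions you should tune for your own codebase:

```javascript
// Flag NEXT_PUBLIC_ env vars whose names suggest they hold secrets.
// Heuristic keyword list — extend it to match your naming conventions.
const SECRET_HINTS = /(SECRET|TOKEN|PASSWORD|PRIVATE|_KEY$)/;

function flagRiskyPublicEnv(envNames) {
  return envNames.filter(
    (name) => name.startsWith("NEXT_PUBLIC_") && SECRET_HINTS.test(name)
  );
}

console.log(flagRiskyPublicEnv([
  "NEXT_PUBLIC_API_URL",       // fine: public by design
  "NEXT_PUBLIC_SERVICE_TOKEN", // the AI's "fix" — now client-visible
  "DATABASE_PASSWORD",         // server-only, never bundled
]));
// → [ 'NEXT_PUBLIC_SERVICE_TOKEN' ]
```

Wire it into CI against the keys of your `.env` files and the merge fails before the bundle ever leaks.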

Whether you are a consultant at Bain struggling with high-scale infrastructure or a lone developer vibe-coding a new SaaS, the fundamental risk is the same: You skip the step where you verify what actually shipped in the compiled output.


What Automated Bundle Scanning Catches

The Bain breach took 18 minutes. A bundle scan takes 20 seconds. Here is what automated security scanning detects — and where it stops:

| Vulnerability Category | WebValid Security Scanner | Example from MBB Breaches |
| --- | --- | --- |
| Hardcoded credentials in JS bundle | ✅ Detects secret patterns in bundle text | Bain/Pyxis — service account credentials |
| API keys and tokens in client code | ✅ Scans for Stripe, OpenAI, Google, JWT patterns | Bain — 36,869 JWT tokens |
| Missing security headers (CSP, HSTS) | ✅ Checks HTTP response headers | All 3 firms — missing or weak headers |
| Mixed content (HTTP on HTTPS pages) | ✅ Flags insecure requests | Network-level misconfigurations |
| SQL injection in API endpoints | ❌ Backend testing required | McKinsey, BCG, Bain — all had injectable endpoints |
| API authentication logic | ❌ Business logic review required | BCG — zero-auth SQL endpoint |
| System prompt exposure | ❌ Not a client-side artifact | Bain — 18,621-char system prompt leaked |

WebValid is a client-side security scanner. It audits the compiled frontend — the JavaScript bundles, HTTP headers, and network requests that your browser sees. It does not perform penetration testing, SQL injection fuzzing, or backend API auditing. It catches the first domino — the leaked credential or missing header that starts the attack chain.
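The header checks in the table are equally mechanical. A minimal sketch of the idea, using a small subset of the headers a real scanner verifies:

```javascript
// Report security headers missing from an HTTP response.
// Subset only — a full audit checks many more (Referrer-Policy, COOP, etc.).
const REQUIRED_HEADERS = [
  "content-security-policy",
  "strict-transport-security",
  "x-content-type-options",
];

function missingSecurityHeaders(responseHeaders) {
  // Header names are case-insensitive, so normalize before comparing.
  const present = new Set(
    Object.keys(responseHeaders).map((h) => h.toLowerCase())
  );
  return REQUIRED_HEADERS.filter((h) => !present.has(h));
}

// Example: a response that sets HSTS but nothing else
console.log(missingSecurityHeaders({
  "Strict-Transport-Security": "max-age=63072000",
  "Content-Type": "text/html",
}));
// → [ 'content-security-policy', 'x-content-type-options' ]
```

In practice you would feed this the headers of a real `fetch` response against your deployed origin.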


Why Manual Audits Are a Recipe for Failure

Even if you know where the leaks happen, catching them manually across a moving codebase is nearly impossible. Consider what a human must do for every deployment to match an automated scanner: download the compiled bundles, grep them for every known secret pattern, verify the security headers on every route, and re-check each network request for mixed content.

A vibe-coder ships multiple times a day. Enterprise teams ship hundreds of micro-services. No human process can audit the compiled output of every hotfix with 100% consistency. This isn’t a lack of discipline; it’s a lack of automation for the final mile of security.

Catch the First Domino Before it Falls

The Bain breach took 18 minutes. The initial leak could have been flagged in 20 seconds.

By catching exposed credentials and network misconfigurations in your bundles, you stop the attack before it can ever reach your backend. You don’t need a public production URL to protect yourself. Most modern development teams use Tunnels (like Ngrok or Cloudflare Tunnel) to audit their local builds or private staging environments before they go public.

  1. Launch your project (or local build) through a tunnel.
  2. Drop the tunnel URL into WebValid.
  3. Receive a ready-to-use ai-fix-prompt in Markdown — paste it into your AI assistant and fix the leaks before they even hit your production server.

Run a free security audit on your public or private site (via tunnel) right now:

→ Test your project for free on WebValid

Have questions about your audit results? Start auditing for free

