WebValid
WebValid Team

Leaked API Keys: How AI Compromises Junior Developers During Code Generation

AI Coding · Security · React · Next.js · Vibe Coding

Stack: React, Next.js App Router, Vite. Problem: Leaking secret keys and tokens into client-side JS bundles through incorrect environment variable usage in AI-generated code.

Introduction

Imagine a solo developer’s typical evening: coffee, a code editor open, and the joy of watching a new feature, an AI integration for a SaaS product, work flawlessly. Vibe-coding in action: you asked an assistant to write a React client component, it produced the code, and the data renders perfectly. You push to master and head to bed.

The next morning, you wake up to a Google Cloud bill for $82,314, accrued in just 48 hours. The reason? A single line of AI-generated code packaged a secret into a public bundle. API key leaks like this have become one of the industry’s biggest challenges.

Artificial intelligence has radically accelerated development. But large language models have a fundamental blind spot, as documented in the Garry Tan audit case: they optimize code for the current file, completely ignoring the architectural boundary between a secure server and a public client. When you ask an AI to “quickly connect payments” or “fetch data from the database on the frontend,” it generates working code. But it also silently leaks your secrets. If you don’t audit your client-side code, losing control over your Next.js or Vite infrastructure is only a matter of time.


Fact-Check: Thousands of Google Cloud Keys Exposed

🔴 Critical Case Study · Financial Loss · OWASP A02:2021 Cryptographic Failures

The scale of the problem goes beyond simple junior mistakes. In early 2026, security experts at Truffle Security published a startling study. They scanned the source code of public websites and found 2,863 active Google Cloud keys embedded directly in client-side JavaScript (usually starting with the AIza prefix).

The scariest part of this case is “Retroactive Privilege Expansion.” For years, Google recommended using these keys for secure public services like Google Maps. Developers confidently embedded them in the frontend because they didn’t grant access to private data. But when projects started mass-enabling Gemini AI APIs, all those old “public” keys automatically gained rights to run heavy inference models. A harmless map key turned into an unlimited credit card for hackers parsing these keys from JS bundles.

This is further confirmed by GitGuardian’s “State of Secrets Sprawl 2026” report: in 2025 alone, 28.65 million new hardcoded secrets were leaked to public GitHub repositories (a 34% increase). Notably, commits created with AI tools leak exactly twice as often as human-written code. AI doesn’t care about security; its goal is to make the code work right now.


Anatomy of a Leak

🔴 Critical · Infrastructure Compromise · OWASP A02:2021 Cryptographic Failures

Why do AI assistants leak data so easily? It comes down to how modern frontend frameworks (Next.js, Vite) manage environment variables. To make a variable accessible in the browser, frameworks require a specific prefix: NEXT_PUBLIC_ for Next.js or VITE_ for Vite.
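For example, in a hypothetical .env file (the variable names here are placeholders), only the prefixed entry ever reaches the browser:

```
# Bundled into client-side JS: readable by every visitor
NEXT_PUBLIC_ANALYTICS_ID=G-XXXXXXX

# Server-only: never shipped to the browser
STRIPE_SECRET_KEY=sk_live_xxx
```

The prefix is an explicit opt-in to public exposure, which is exactly why an AI assistant adding it just to silence an error is so dangerous.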

During vibe-coding, a classic scenario occurs:

  1. You ask an AI to write a client component that fetches data from a third-party API (e.g., OpenAI or Stripe).
  2. The AI writes a fetch using standard process.env.SECRET_KEY.
  3. In the browser, the request fails because process.env.SECRET_KEY is undefined on the client. You paste the error back into the chat.
  4. The AI assistant sees the “variable not found on client” error and happily suggests a solution: “Just add the NEXT_PUBLIC_ prefix to your .env file!”

You do it. The request starts working. The AI saved the day. But under the hood, Webpack or the Vite bundler just hardcoded your secret key as a string literal directly into the compiled .js file.

// ❌ AI-hallucination: Typical generated code leaking into the client bundle
export default function CheckoutButton() {
  const handlePayment = async () => {
    const res = await fetch("https://api.stripe.com/v1/charges", {
      headers: {
        // LEAK: The key will be embedded directly in the minified JS
        Authorization: `Bearer ${process.env.NEXT_PUBLIC_STRIPE_SECRET_KEY}`,
      },
    });
  };
  return <button onClick={handlePayment}>Pay Now</button>;
}

Any visitor can press F12, open the Network or Sources tab, and extract the key. These types of leaks are exactly what WebValid’s bundle scanner is designed to catch before your build hits production.

Vite Projects: The VITE_ Trap

In Vite, the mechanism is similar. Only variables prefixed with VITE_ are exposed to your client-side code. If an AI assistant encounters an undefined error in a Vite component, it will recommend adding the prefix:

// ❌ AI-generated leak in Vite
// .env: VITE_SUPABASE_KEY=your_secret_key
// Note: in Vite client code, env variables are read via import.meta.env
const supabase = createClient(
  import.meta.env.VITE_SUPABASE_URL,
  import.meta.env.VITE_SUPABASE_KEY,
);

While this works, Vite’s build process performs a static replacement. Look at your dist/ folder, and you’ll find: const supabase = createClient("https://xyz.supabase.co", "your_secret_key");

Extra protection — Next.js Taint API:

If you are using Next.js 14+ (App Router), you can use the experimental Taint API to explicitly prevent sensitive objects from being passed to the client. This creates a “hard block” that even AI coding assistants can’t accidentally bypass:

// 1. next.config.js
module.exports = { experimental: { taint: true } };

// 2. server-only-logic.ts
import { experimental_taintObjectReference } from "react";

export function getSecureConfig() {
  const config = { apiKey: process.env.STRIPE_SECRET_KEY };
  experimental_taintObjectReference(
    "Do not pass secret config to client",
    config,
  );
  return config;
}

If the AI tries to pass this config object to a Client Component, React will throw a rendering error, preventing the leak at the source.

// ✅ Fix: Move the request to a Server Action or Route Handler
"use server"; // Code executes strictly on the server

export async function processPayment() {
  const res = await fetch("https://api.stripe.com/v1/charges", {
    headers: {
      // SAFE: The key without a prefix exists only in the Node.js environment
      Authorization: `Bearer ${process.env.STRIPE_SECRET_KEY}`,
    },
  });
}

Fatal AI Assistant Mistakes with Secrets

🟠 High · User Data Loss · OWASP A05:2021 Security Misconfiguration

Blindly trusting AI when setting up infrastructure leads to several common errors observed in thousands of projects.

Creating Public .env.example Files

Assistants often generate a .env.example template for project documentation. But since they were trained on massive amounts of GitHub data, they might inadvertently use real-looking token signatures or even strings you mentioned earlier in the session as “placeholders.”

# ❌ Dangerous AI-generated .env.example
# The AI might use a real-looking key format it recently processed
STRIPE_SECRET_KEY=sk_live_51PZ... # AI might leak a real key here
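A safer template uses placeholders that cannot be mistaken for, or collide with, real key material, for example:

```
# ✅ Safe .env.example: obvious placeholders, no realistic key formats
STRIPE_SECRET_KEY=replace-me
SUPABASE_SERVICE_ROLE_KEY=replace-me
```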

Firebase and Supabase Configurations

AI frequently generates initialization code directly in client-side files (such as the root layout.tsx). It dumps configuration objects there without checking whether the backend is protected by Row Level Security (RLS).

// ❌ Dangerous AI code in a Client Component
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY, // ❌ Exposed without RLS check
);
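One belt-and-suspenders pattern is a runtime guard in any module that is supposed to stay server-side. Below is a minimal sketch of the idea in plain TypeScript (the function name is ours; in Next.js the `server-only` package gives you a stronger, build-time version of the same guarantee):

```typescript
// Sketch: throw if a server-only module is ever evaluated in a browser bundle.
// In a browser, globalThis.window is defined; in a Node.js process it is not.
export function assertServerOnly(moduleName: string): void {
  const g = globalThis as { window?: unknown };
  if (g.window !== undefined) {
    throw new Error(`${moduleName} was bundled into client code`);
  }
}

// Call it at the top of any module that touches privileged keys, e.g.:
// assertServerOnly("supabase-admin");
```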

Confusion Between Server and Client Components

In Next.js, AI often creates a Server Component that reads secrets correctly, then adds an onClick or useState at your request. This forces a 'use client' directive onto the file, and suddenly, those variables that were safe on the server are bundled into the client code.

// ❌ AI refactors this to 'use client' to support the button
"use client";
export default function SecretPage({ data }) {
  // If 'data' contains secrets, it's now in the public JS bundle props
  return <button onClick={() => console.log(data)}>Show Data</button>;
}
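A defensive habit for this boundary is to filter what crosses it. The helper below is a hypothetical sketch: the field-name heuristic is an assumption, and an explicit allow-list of safe fields is stricter still.

```typescript
// Sketch: strip secret-looking fields before props cross the server/client boundary.
// The name pattern is a heuristic; adjust it for your own schema.
const SENSITIVE_KEY = /(secret|token|password|key)$/i;

export function toClientSafeProps(
  data: Record<string, unknown>,
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(data).filter(([key]) => !SENSITIVE_KEY.test(key)),
  );
}
```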

Security Under the Hood: How Bundle Checking Works

To understand why automation is necessary, it’s worth looking at exactly how leaks occur. When you run a build, the bundler (Webpack or Turbopack) gathers all dependencies into massive JS files. If an AI assistant “smuggled” a secret in there, it remains as a plain string.

You can see this for yourself by inspecting your build artifacts. This is a fundamental skill every security-conscious developer should have:

  1. Run the build: npm run build.
  2. Navigate to the compiled code folder (.next/static/chunks/ in Next.js or dist/ in Vite).
  3. Use grep or ripgrep to search for known secret signatures:
# Search for signatures: Stripe (sk_), Google Cloud (AIza), JWT (ey...)
grep -r -E "sk_test_|sk_live_|AIza|ey[A-Za-z0-9_-]*\.[A-Za-z0-9_-]*\." .next/static/
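The same signatures can live in code instead of a one-off shell command. Here is a minimal sketch (the patterns are illustrative, not exhaustive, and the function name is ours):

```typescript
// Sketch: detect well-known secret signatures in bundled JS text.
const SECRET_PATTERNS: Record<string, RegExp> = {
  stripe: /sk_(test|live)_[A-Za-z0-9]{10,}/, // Stripe secret keys
  googleCloud: /AIza[0-9A-Za-z_-]{35}/, // Google Cloud API keys
  jwt: /ey[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+/, // JWT-shaped strings
};

// Returns the names of every pattern found in a chunk of bundle source.
export function findSecretSignatures(source: string): string[] {
  return Object.entries(SECRET_PATTERNS)
    .filter(([, pattern]) => pattern.test(source))
    .map(([name]) => name);
}
```

Feeding each file under .next/static/ or dist/ through a function like this in CI means the check runs on every build, not only when someone remembers to grep.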

Manual audits like this are a great way to “feel” the problem. However, in reality, relying on them is dangerous: regular expressions can have false positives, and the human factor (forgetting to run the script before deployment) is too high. This is where professional automation comes in.


Automated JS Bundle Leak Scanner

Opening a terminal and writing regexes before every release is exactly the kind of step developers skip in the rush. Vibe-coding is about speed; audits should be instant too.

Feature | WebValid Security Scanner
Scan keys in JS bundle | ✅ Analyzes AST and bundle text for secret patterns
Security headers | ✅ Checks CSP, HSTS, X-Frame-Options
Broken semantics / ARIA | ✅ Analyzes rendered HTML
Authorization logic | ❌ Requires manual business logic review
Source code static analysis | ❌ Only checks compiled production bundles

Simply compile your project and provide the URL to the public scanner. If AI left Stripe, OpenAI, or Google Cloud keys exposed in your public bundle, the scanner won’t just point it out; it will provide a full Markdown report with ai-fix instructions.

AI writes functional code exceptionally well—it just doesn’t know when its architecture took a wrong turn and placed a password in a public folder. Give it an error map, and it will fix everything itself. Learn how to turn these reports into ready-to-paste AI tasks.


Your 5-Point API Security Checklist

Before you push your next generative feature to production, run this quick manual check:

  1. Grep Your Build: Use grep -r "sk_" or grep -r "AIza" in your .next/ or dist/ folder.
  2. Prefix Audit: Scan your .env files. Does any secret have a NEXT_PUBLIC_ or VITE_ prefix?
  3. Interactivity Check: Did the AI add 'use client' to a Server Component that handles env vars?
  4. Fixture Check: Are you serving .env.example or test mocks in your production distribution?
  5. Action Review: Do your Server Actions use getServerSession (or equivalent) to verify ownership?
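Points 1 and 2 of this checklist are easy to automate. The hypothetical helper below flags any public-prefixed variable whose value looks like live key material (the value heuristics are assumptions; extend them for your providers):

```typescript
// Sketch: audit .env text for secrets hiding behind NEXT_PUBLIC_ / VITE_ prefixes.
const PUBLIC_PREFIXES = ["NEXT_PUBLIC_", "VITE_"];
const SECRET_LIKE_VALUE = /^(sk_|AIza|ey[A-Za-z0-9_-]+\.)/;

// Returns the names of public-prefixed variables with secret-looking values.
export function auditEnv(envText: string): string[] {
  return envText
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line !== "" && !line.startsWith("#"))
    .flatMap((line) => {
      const eq = line.indexOf("=");
      if (eq === -1) return [];
      const key = line.slice(0, eq).trim();
      const value = line.slice(eq + 1).trim();
      const isPublic = PUBLIC_PREFIXES.some((p) => key.startsWith(p));
      return isPublic && SECRET_LIKE_VALUE.test(value) ? [key] : [];
    });
}
```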

Get a ready-to-paste AI fix prompt in 20 seconds. Zero config. Free scan: https://webvalid.dev/

Have questions about your audit results? Get in touch

