• WebValid Team

The Illusion of Clean Code: Why Linters (ESLint) Don't Save You from AI Logic Errors

ESLint · AI Coding · Vibe Coding · QA · Accessibility

Tech Stack: The examples use React and Next.js App Router, but the principles of debugging AI-generated code apply to any modern frontend stack (Vite, Vue, Svelte).

Your project is shining. You’ve just “vibe-coded” a whole batch of components in Cursor or Copilot, and it was incredibly fast. You run npm run lint, and the terminal returns a pristine result: “0 errors, 0 warnings”. You feel like a productivity god. But the moment you deploy, or run the first check on staging, everything falls apart: search engines don’t see the pages, screen readers stumble over empty buttons, and in Safari the site looks like a relic from the 90s.

The problem is that we’ve trusted linters too much. ESLint is a superb tool for checking syntax and enforcing style, but it is completely blind to what your code produces in a real browser. The linter sees the developer’s “intent”; WebValid sees what the user actually received.

In the era of AI coding, where neural networks generate code in batches, static checking alone has become a dangerous illusion of safety. Let’s break down the “Gallery of Blind Spots” where your linter is just a useless decorator.


1. Accessibility Gaps: When alt Exists, but Meaning Doesn’t

🔴 High · Accessibility Failure · WCAG 1.1.1 (Non-text Content)

A linter is a software algorithm: it checks for the presence of an attribute. If you use eslint-plugin-jsx-a11y, it gives a green light as soon as it sees an alt attribute. But it cannot evaluate what is actually written there.

What AI Does (Before): Neural networks often get lazy and generate “technical” descriptions or use filenames as placeholders.

// ❌ The linter thinks this code is perfect (the alt attribute is formally present)
<img src="/assets/hero-bg.jpg" alt="image" />
<img src="/icons/checkmark.svg" alt="checkmark_icon_final_v2.svg" />

The Problem: For a screen reader, alt="image" is just noise, and reading a filename is nonsense. The linter is “satisfied,” but your UI remains inaccessible.

It’s important to understand: automation tools (including Axe-core) cover at most 20-30% of WCAG issues. They don’t “understand” the meaning of an image (they won’t distinguish a tank from a dog), but they are excellent at finding technical anti-patterns: empty links, duplicate descriptions, or the use of reserved words (“photo”, “image”) that AI loves to substitute by default.
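Those technical anti-patterns are mechanically detectable. A minimal sketch of such a filter (the reserved-word list and filename heuristic here are illustrative, not WebValid’s actual rule set):

```typescript
// Flags alt text that is formally present but semantically useless.
// ⚠️ The reserved words and filename heuristic are illustrative only.
const RESERVED_WORDS = new Set(["image", "img", "photo", "picture", "icon", "graphic"]);

export function isJunkAltText(alt: string): boolean {
  const text = alt.trim().toLowerCase();
  if (text.length === 0) return true;                        // empty alt on a meaningful image
  if (RESERVED_WORDS.has(text)) return true;                 // noise words AI substitutes by default
  if (/\.(jpe?g|png|svg|gif|webp)$/.test(text)) return true; // filename leaked into alt
  return false;
}
```

Run over the rendered DOM, a check like this flags both examples above while letting a real description like “A golden retriever catching a frisbee” pass.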

How WebValid Catches It (The Truth): Runtime audit analyzes the final DOM, not the source code. It finds these “garbage” placeholders and highlights areas that require manual or AI semantic review. It’s an effective filter that cuts out technical defects before you call in testers.

Read more about how AI breaks markup semantics in our deep dive “Top 7 AI Accessibility Mistakes”.


2. Broken Links: Navigation That Leads Nowhere

🟡 Medium · Traffic Loss · SEO Health

AI is great at writing local files, but it has no idea about the structure of your site on staging or in production. It can generate perfect navigation that leads nowhere.

What AI Does (Before): You ask it to “make a menu with pricing and features.” AI generates:

// ❌ The code is syntactically correct; the linter is happy
<Link href="/pricing">Pricing</Link>
<a href="#features">Our Features</a>

The Problem: The /pricing page might have been deleted yesterday or renamed to /plans. The #features anchor might be missing in the final DOM because the AI named the section id="our-features". The linter will never know about a 404 error or a “dead” transition until you click it yourself.

How WebValid Catches It (The Truth): Network Scanner and Sitemap Scanner literally follow every link on the rendered page. They will discover that the /pricing page returned a 404, and the #features anchor is not attached to any element. This is a check of “external truth” that is unavailable to static analysis.
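The anchor half of that check reduces to comparing the page’s fragment links against the ids actually present in the rendered DOM. A sketch (the function and parameter names are ours, not WebValid’s API):

```typescript
// Given every href found on a rendered page and every id present in its
// DOM, report fragment links (#...) that point at nothing.
export function findDeadAnchors(hrefs: string[], domIds: string[]): string[] {
  const ids = new Set(domIds);
  return hrefs.filter(
    (href) => href.startsWith("#") && href.length > 1 && !ids.has(href.slice(1)),
  );
}
```

For the example above, `findDeadAnchors(["#features"], ["our-features"])` reports `#features` as dead, because the AI named the section `id="our-features"`.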


3. SSL Certificate Expired, and You’re the Last to Know

🔴 Critical · Site Blocked · OWASP A05:2021 (Security Misconfiguration)

There is a category of bugs that don’t live in the code at all. They live in server configs, and no ESLint can reach them.

What AI Does (Before): AI suggests connecting a useful script or font via a hardcoded link:

<!-- ❌ Everything is clean in the code -->
<script src="http://cdn.example.com/analytics.js"></script>

The Problem: If your site is on HTTPS, the browser will block this script as Mixed Content. Moreover, your Content Security Policy (CSP) might prohibit loading scripts from external domains. The linter only sees a text file and says “Ok.” As a result, the analytics button doesn’t work, and a grey “Not Secure” icon glows in the browser. Or worse, your SSL certificate expires in 2 days—the linter will stay silent.
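The mixed-content half of this check is a simple invariant over the rendered page’s resource URLs. A sketch (assuming the URLs were already collected from the network log; names are ours, and protocol-relative or malformed URLs are ignored for brevity):

```typescript
// On an https:// page, any subresource requested over plain http:// is
// Mixed Content; modern browsers block scripts like this outright.
export function findMixedContent(pageUrl: string, resourceUrls: string[]): string[] {
  if (!pageUrl.startsWith("https://")) return []; // only https pages are affected
  return resourceUrls.filter((url) => url.startsWith("http://"));
}
```

The expired-certificate case is the same idea one layer down: it only exists in the TLS handshake with the live server, which no amount of source-file analysis can observe.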

How WebValid Catches It (The Truth): SSL Scanner and Security Scanner check the server response. They see that the certificate is about to “go bad” and the browser is complaining about insecure resources. This is the level of “truth” that only manifests on a live address.


4. Jumpy Layout: Why CLS Kills Conversion

🔴 High · User Annoyance · Core Web Vitals (CLS)

In modern frameworks like Next.js, linters have indeed learned to catch missing image dimensions (requiring width and height for the <Image> component). But a “green” linter still doesn’t guarantee a stable layout in the browser.

What AI Does (Before): AI might correctly use framework components, but it doesn’t see the loading context of the entire page. It can generate perfect code that still “jumps” due to external factors.

The Problem: Cumulative Layout Shift (CLS) is a runtime metric. Your linter is absolutely blind to the fact that:

  1. a web font loads late and its swap reflows every headline;
  2. an image from the CMS arrives without intrinsic dimensions and pushes content down;
  3. a third-party banner, cookie bar, or chat widget injects itself above the fold after load.

The linter checks props, while WebValid measures the real experience. Even if your code passes all next/image checks, the final CLS might be red due to infrastructure bugs visible only in the browser.
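CLS itself is computed from `layout-shift` entries reported by the browser’s Layout Instability API. A simplified aggregation sketch (the current CLS definition additionally groups shifts into session windows and reports the worst window; a plain sum matches the original, pre-2021 definition):

```typescript
// Shape of a `layout-shift` PerformanceEntry (Layout Instability API).
interface LayoutShiftEntry {
  value: number;           // viewport fraction moved, weighted by distance
  hadRecentInput: boolean; // shifts right after user input are excluded
}

// Sum the scores of unexpected shifts; shifts triggered by user
// interaction do not count against the page.
export function cumulativeLayoutShift(entries: LayoutShiftEntry[]): number {
  return entries
    .filter((e) => !e.hadRecentInput)
    .reduce((sum, e) => sum + e.value, 0);
}
```

The key point: the inputs to this function only exist at runtime, in a real rendering engine, which is exactly why no static tool can compute them.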

How WebValid Catches It (The Truth): Lighthouse Scanner measures visual stability in real-time. It “lives through” the page load as a real user and records every layout shift, issuing a score based on what the user saw, not what the developer wrote.

Want to know more about visual bugs? Check out our guide “Top 5 AI CSS Mistakes”.

Is your layout “jumping”? Measure real CLS in the browser right now.


5. Competing IDs: When Components Break Each Other

🟡 Medium · Broken Functionality · HTML5 Validity

AI works in a limited context. When you ask it to create a “form component,” it creates it in isolation.

What AI Does (Before): You created a ContactForm and an AuthForm. In both cases, the AI used a typical ID for the button:

// File ContactForm.tsx
<button id="submit-btn">Submit</button>

// File AuthForm.tsx
<button id="submit-btn">Login</button>

The Problem: The linter checks each file separately. For it, id="submit-btn" inside one file is normal. But when you render both forms on the same page (e.g., on a landing page), your DOM becomes invalid. Two elements with the same ID are a disaster for JavaScript logic and screen readers: a screen reader will simply “lose” one of the buttons, and your document.getElementById call will return the wrong element.

How WebValid Catches It (The Truth): HTML Syntax Scanner analyzes the final rendered HTML code of the entire page. It instantly detects duplicate IDs that were “invisible” to the linter while the components lived in different files.
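The whole-page invariant is trivial to state once you have the rendered DOM. A sketch (the function name is ours, for illustration):

```typescript
// Report every id that appears more than once in the rendered document:
// the cross-component invariant a per-file linter cannot check.
export function findDuplicateIds(ids: string[]): string[] {
  const seen = new Set<string>();
  const dupes = new Set<string>();
  for (const id of ids) {
    if (seen.has(id)) dupes.add(id);
    seen.add(id);
  }
  return [...dupes];
}
```

In a browser console you would feed it `[...document.querySelectorAll("[id]")].map((el) => el.id)`; the two forms above immediately surface `submit-btn`.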

We wrote about how “hallucinations” in the DOM hinder product growth here.


6. Hydration Errors: When Server and Browser Disagree

🔴 High · White Screen · React Hydration Error

This is the most insidious bug of the modern frontend. AI loves to use checks like typeof window !== 'undefined'.

What AI Does (Before): The neural network tries to adapt the code for the browser:

// ❌ Linter sees valid JS
const isMobile = typeof window !== "undefined" && window.innerWidth < 768;

return <div>{isMobile ? "Mobile View" : "Desktop View"}</div>;

The Problem: On the server (SSR), there is no window, so ‘Desktop View’ will render. But as soon as the page loads in the browser, React will see that it’s actually ‘Mobile View’. A Hydration Mismatch occurs. At best, your layout will “flicker”; at worst, the whole app will crash with an error, leaving the user with a white screen. The linter considers this code safe because it’s syntactically logical.
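The mismatch can be reproduced without a browser: render the same component logic once “as the server” (no window) and once “as the client”, and compare. A minimal simulation (all names are ours, for illustration):

```typescript
// window may be absent (server) or present (browser).
type MaybeWindow = { innerWidth: number } | undefined;

// The component logic from above, as a pure function of the environment.
function renderLabel(win: MaybeWindow): string {
  const isMobile = win !== undefined && win.innerWidth < 768;
  return isMobile ? "Mobile View" : "Desktop View";
}

// React compares the server HTML with the first client render during
// hydration; if they differ, it reports a hydration error.
export function hasHydrationMismatch(clientWidth: number): boolean {
  const serverHtml = renderLabel(undefined);                  // SSR: no window
  const clientHtml = renderLabel({ innerWidth: clientWidth }); // browser
  return serverHtml !== clientHtml;
}
```

On a 375px phone the server says “Desktop View” and the client says “Mobile View”, so hydration fails. The idiomatic fix is to render the server-safe default on the first pass and read `window.innerWidth` inside a `useEffect`, updating state only after hydration completes.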

How WebValid Catches It (The Truth): WebValid runs your project in a real Headless browser. If a hydration error or a runtime exception pops up in the console, Network Scanner (in console audit mode) records it as a critical bug.


7. Leaky Bundle: Your API Keys in Public Access

🔴 Critical · Data Leak · OWASP A01:2021 (Broken Access Control)

This is the most dangerous bug that AI can “gift” to your project. When generating code, neural networks often use placeholders or ask you to insert keys directly “just to check.”

What AI Does (Before): You ask the AI to add Stripe or Firebase integration. The AI generates a client config and obligingly inserts a secret key or token that should only live on the server.

// ❌ The linter sees a valid JS object. It doesn't know it's a secret.
export const stripeConfig = {
  publicKey: "pk_live_...",
  secret_key: "sk_live_...", // ⚠️ CRITICAL LEAK
};

The Problem: The linter checks syntax. For it, any string is just a string. But as soon as this code enters the bundle, your secret key becomes available to anyone who opens DevTools. Attackers use bots to scan JS files for sk_live, aws_key, and other patterns. The result: drained accounts and compromised user data.
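Such a scan is, at its core, a pattern match over the compiled bundle text. A sketch with a handful of well-known prefixes (the list is illustrative; real scanners, WebValid’s included per the description below, maintain far larger pattern databases):

```typescript
// ⚠️ A few well-known secret-key prefixes, for illustration only.
const SECRET_PATTERNS: RegExp[] = [
  /sk_live_[A-Za-z0-9]+/g,  // Stripe live secret key
  /AKIA[0-9A-Z]{16}/g,      // AWS access key id
  /AIza[0-9A-Za-z_-]{35}/g, // Google API key
];

// Scan the compiled bundle source for anything matching a secret pattern.
export function findLeakedSecrets(bundleSource: string): string[] {
  return SECRET_PATTERNS.flatMap((re) => bundleSource.match(re) ?? []);
}
```

Note that it must run on the final build output, not the source tree: a key injected at build time from an env variable never appears in any file the linter sees.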

How WebValid Catches It (The Truth): Security Scanner analyzes the final compiled bundle that the browser receives, not the source files. It uses a database of thousands of patterns from known services and instantly raises an alarm if a secret token or a “forgotten” debug mode is found in the public code.

Analysis of how Next.js and Vite bundles reveal secrets is available in the article “Leaked API Keys”.


WebValid vs Lighthouse: Why Not Just “axe in CI”?

A technical reader might ask: “Why do I need a separate service if there’s Lighthouse and axe DevTools?”. The answer lies in the entry barrier and scope:

  1. Reality vs Lab: Lighthouse in CI checks a “sterile” version. WebValid checks the live address with all its network latencies, expired SSLs, and fallen CDNs.
  2. Zero-Config: To integrate axe into CI, you need to write YAML configs, set up runners, and store logs. In WebValid, you just enter the URL and get a Markdown report ready for a prompt in Cursor.
  3. 24/7 Monitoring: Linters and CI only work at the time of commit. A certificate could expire, or an API key could leak through a third-party script at any time. WebValid monitors this constantly.

Fact-Check: The Accessibility Debt Trap

To understand that these aren’t just empty scare stories, let’s look at the numbers. Linters are widespread in modern development, but this doesn’t correlate with accessibility quality, which is indirectly confirmed by the WebAIM Million 2024 report, which analyzed the top 1 million home pages: 95.9% of them had detectable WCAG 2 failures, with an average of more than 50 errors per page.

Almost all of these sites pass standard npm run lint checks. A linter knows how to slap your wrist for an extra space, but it allows you to release digital junk into the world that is impossible to use.


How It Works Together: ESLint + WebValid

You don’t need to give up linters. They are your first line of defense. But they cannot be the last.

| Feature | ESLint (Static) | WebValid (Runtime) |
| --- | --- | --- |
| Syntax errors | ✅ Catches instantly | ❌ Not for this |
| alt text quality | ❌ Only checks presence | ✅ Evaluates meaning and utility |
| SSL and certificates | ❌ Blind | ✅ Monitors 24/7 |
| Broken links (404) | ❌ Blind | ✅ Checks every URL |
| Layout Shift (CLS) | ❌ Blind | ✅ Measures in browser |
| Hydration errors | ❌ Blind | ✅ Catches in console |
| Key leaks in bundle | ⚠️ Only patterns in code | ✅ Scans final build |

Implementing WebValid in Your Vibe-Coding Workflow

Web audit in 2026 is not a boring PDF for a manager. For a solo developer and “vibe-coder,” it is a precise prompt for fixing bugs that a linter cannot even notice.

When to Run an Audit?

  1. After every deploy to a Vercel/Netlify Preview, before you merge.
  2. Before a production release, against the staging URL.
  3. On a schedule against the live site, so an expired certificate or a dead link doesn’t wait for your next commit.

What You Get in 60 Seconds: You enter the staging URL and receive a Markdown report. No need to configure anything, write YAML configs, or wait for CI pipeline completion. The report contains ready-to-use ai-fix instructions: just copy them and paste them into Cursor or ChatGPT.

Your New Quality Checklist:

  1. Lint: ESLint brushes up the syntax (5 sec).
  2. Deploy: Code flies to Vercel/Netlify Preview (30 sec).
  3. Audit: WebValid checks the “live truth” (60 sec).
  4. Fix: You feed the Markdown report to your AI and fix everything in one prompt.

Digital cleanliness is when the linter is green in the editor, and WebValid confirms that your product is accessible, secure, and fast in the real world.

Don’t guess what your AI broke today. Get a deterministic list of the logic and runtime bugs on your site right now. The first report is free, with no registration or complex setup required.

→ Run Audit at webvalid.dev

