The Illusion of Clean Code: Why Linters (ESLint) Don't Save You from AI Logic Errors
Tech Stack: The examples use React and Next.js App Router, but the principles of debugging AI-generated code apply to any modern frontend stack (Vite, Vue, Svelte).
Your project is humming. You've just "vibe-coded" a whole batch of components in Cursor or Copilot, and it was incredibly fast. You run npm run lint, and the terminal returns a pristine result: "0 errors, 0 warnings". You feel like a productivity god. But as soon as it comes to deployment or the first check on staging, everything falls apart: search engines don't see the pages, screen readers stumble over empty buttons, and the site in Safari looks like a relic from the 90s.
The problem is that we've trusted linters too much. ESLint is a superb tool for checking syntax and enforcing style, but it is completely blind to how your code behaves in a real browser. The linter sees the developer's intent, while WebValid sees what the user actually received.
In the era of AI coding, where neural networks generate code in batches, static checking has become a dangerous illusion of safety. Let's walk through the "Gallery of Blind Spots" where your linter is just a useless decorator.
1. Accessibility Gaps: When alt Exists but Meaning Doesn't
🔴 High · Accessibility Failure · WCAG 1.1.1 (Non-text Content)
A linter is a software algorithm; it checks for the presence of an attribute. If you use eslint-plugin-jsx-a11y, it will give a green light as soon as it sees an alt attribute. But it cannot evaluate what is actually written there.
What AI Does (Before): Neural networks often get lazy and generate "technical" descriptions or use filenames as placeholders.
```jsx
// ❌ The linter thinks this code is perfect (the alt attribute is formally present)
<img src="/assets/hero-bg.jpg" alt="image" />
<img src="/icons/checkmark.svg" alt="checkmark_icon_final_v2.svg" />
```
The Problem:
For a screen reader, alt="image" is just noise, and a read-out filename is nonsense. The linter is "satisfied," but your UI remains inaccessible.
It's important to understand: automated tools (including Axe-core) cover at most 20-30% of WCAG issues. They don't "understand" the meaning of an image (they won't distinguish a tank from a dog), but they are excellent at finding technical anti-patterns: empty links, duplicate descriptions, or the use of reserved words ("photo", "image") that AI loves to substitute by default.
How WebValid Catches It (The Truth): The runtime audit analyzes the final DOM, not the source code. It finds these "garbage" placeholders and highlights areas that require manual or AI semantic review. It's an effective filter that cuts out technical defects before you call in testers.
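The technical side of such a check fits in a few lines. Here is a minimal sketch of a "garbage alt" detector; the reserved-word list and the function name are illustrative assumptions, not WebValid's actual rules:

```typescript
// Minimal sketch of a "garbage alt" detector (not WebValid's real logic).
// Flags reserved placeholder words and filename-like values that AI tools
// often leave in alt attributes.
const RESERVED_WORDS = new Set(["image", "photo", "picture", "icon", "img"]);

function isGarbageAlt(alt: string): boolean {
  const value = alt.trim().toLowerCase();
  if (value.length === 0) return true;                        // empty alt
  if (RESERVED_WORDS.has(value)) return true;                 // alt="image"
  if (/\.(png|jpe?g|svg|gif|webp)$/.test(value)) return true; // filename as alt
  return false;
}

console.log(isGarbageAlt("image"));                               // placeholder word
console.log(isGarbageAlt("checkmark_icon_final_v2.svg"));         // filename
console.log(isGarbageAlt("A green checkmark confirming payment")); // looks fine
```

Note that this only filters *technical* defects; whether a human-sounding alt actually describes the image still needs semantic review.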
Read more about how AI breaks markup semantics in our deep dive "Top 7 AI Accessibility Mistakes".
2. Dead Links: What the "Green Linter" Doesn't See
🟡 Medium · Traffic Loss · SEO Health
AI is great at writing local files, but it has no idea about the structure of your site on staging or in production. It can generate perfect navigation that leads nowhere.
What AI Does (Before): You ask it to "make a menu with pricing and features." AI generates:
```jsx
// ❌ The code is syntactically correct; the linter is happy
<Link href="/pricing">Pricing</Link>
<a href="#features">Our Features</a>
```
The Problem:
The /pricing page might have been deleted yesterday or renamed to /plans. The #features anchor might be missing in the final DOM because the AI named the section id="our-features". The linter will never know about a 404 error or a "dead" link until you click it yourself.
How WebValid Catches It (The Truth):
The Network Scanner and Sitemap Scanner literally follow every link on the rendered page. They will discover that the /pricing page returned a 404 and that the #features anchor is not attached to any element. This is a check of "external truth" that is unavailable to static analysis.
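The anchor half of that check is easy to picture: collect every id in the rendered DOM and verify that each in-page link points at one of them. A simplified sketch, using regexes purely for illustration (a real scanner walks the parsed DOM):

```typescript
// Simplified sketch: find in-page anchors (#foo) that don't match any id
// in the rendered HTML. Regex extraction is for illustration only.
function findDeadAnchors(html: string): string[] {
  const ids = new Set([...html.matchAll(/\bid="([^"]+)"/g)].map(m => m[1]));
  const anchors = [...html.matchAll(/\bhref="#([^"]+)"/g)].map(m => m[1]);
  return anchors.filter(a => !ids.has(a));
}

// The AI wrote href="#features" but named the section id="our-features":
const rendered = `
  <a href="#features">Our Features</a>
  <section id="our-features"></section>`;

console.log(findDeadAnchors(rendered)); // the #features link points nowhere
```

Checking the /pricing half requires an actual HTTP request against the live site, which is exactly why static analysis cannot do it.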
3. SSL Certificate Expired, and You're the Last to Know
🔴 Critical · Site Blocked · OWASP A05:2021 (Security Misconfiguration)
There is a category of bugs that don't live in the code at all. They live in server configs, and no ESLint can reach them.
What AI Does (Before): AI suggests connecting a useful script or font via a hardcoded link:
```html
<!-- ❌ Everything is clean in the code -->
<script src="http://cdn.example.com/analytics.js"></script>
```
The Problem: If your site is on HTTPS, the browser will block this script as Mixed Content. Moreover, your Content Security Policy (CSP) might prohibit loading scripts from external domains. The linter only sees a text file and says "OK." As a result, analytics silently stops working, and a grey "Not Secure" icon glows in the browser. Or worse, your SSL certificate expires in 2 days, and the linter will stay silent.
How WebValid Catches It (The Truth): The SSL Scanner and Security Scanner check the server response. They see that the certificate is about to "go bad" and that the browser is complaining about insecure resources. This is the level of "truth" that only manifests on a live address.
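The expiry part of that check is just date arithmetic over the certificate's expiration timestamp. A hedged sketch, with the date hardcoded for illustration (in practice it would come from the TLS handshake, e.g. Node's `tls` socket `getPeerCertificate().valid_to`):

```typescript
// Sketch: compute how many full days remain before a certificate expires.
// `notAfter` is hardcoded here; a real scanner reads it from the TLS handshake.
function daysUntilExpiry(notAfter: string, now: Date = new Date()): number {
  const expires = new Date(notAfter).getTime();
  return Math.floor((expires - now.getTime()) / (1000 * 60 * 60 * 24));
}

const today = new Date("2026-01-01T00:00:00Z");
console.log(daysUntilExpiry("2026-01-03T00:00:00Z", today)); // 2 days left: alarm
console.log(daysUntilExpiry("2026-06-01T00:00:00Z", today) > 30); // plenty of time
```

A monitoring service runs this continuously against the live address; a linter, by definition, never sees the handshake at all.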
4. Jumpy Layout: Why CLS Kills Conversion
🟡 High · User Annoyance · Core Web Vitals (CLS)
In modern frameworks like Next.js, linters have indeed learned to catch missing image dimensions (requiring width and height for the <Image> component). But a "green" linter still doesn't guarantee a stable layout in the browser.
What AI Does (Before): AI might correctly use framework components, but it doesn't see the loading context of the entire page. It can generate perfect code that still "jumps" due to external factors.
The Problem: Cumulative Layout Shift (CLS) is a runtime metric. Your linter is absolutely blind to the fact that:
- A third-party script (chat, cookie banner, or ad) injects a block into the middle of the page 2 seconds after loading.
- A custom font loaded with a delay and "redrew" all the headers, shifting the text down.
- Dynamic content (e.g., a promotional banner) came from an API without a reserved space in CSS.
The linter checks props, while WebValid measures the real experience. Even if your code passes all next/image checks, the final CLS might be red due to infrastructure bugs visible only in the browser.
How WebValid Catches It (The Truth): The Lighthouse Scanner measures visual stability in real time. It "lives through" the page load as a real user and records every layout shift, issuing a score based on what the user saw, not what the developer wrote.
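Conceptually, the browser reports a score for every unexpected layout shift, and the metric aggregates them. Here is a deliberately naive accumulator; the real Core Web Vitals definition groups shifts into session windows and takes the largest window, but a plain sum is enough to show why this can only be measured at runtime:

```typescript
// Naive CLS sketch: sum shift scores that happened without recent user input.
// (Real CLS uses session windows; a plain sum is enough for illustration.)
interface LayoutShift {
  value: number;           // shift score reported by the browser
  hadRecentInput: boolean; // shifts right after user input don't count
}

function naiveCls(shifts: LayoutShift[]): number {
  return shifts
    .filter(s => !s.hadRecentInput)
    .reduce((sum, s) => sum + s.value, 0);
}

// A late chat widget plus a font swap, with no user input involved:
const score = naiveCls([
  { value: 0.12, hadRecentInput: false }, // chat widget injected at 2s
  { value: 0.15, hadRecentInput: false }, // font swap reflowed the headers
  { value: 0.3, hadRecentInput: true },   // shift caused by a click: ignored
]);
console.log(score > 0.25); // over the commonly cited 0.25 "poor" threshold
```

Every input to this function comes from the live page (third-party scripts, font timing, API responses), which is exactly the data a static linter never has.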
Want to know more about visual bugs? Check out our guide "Top 5 AI CSS Mistakes".
Is your layout "jumping"? Measure real CLS in the browser right now.
5. Competing IDs: When Components Break Each Other
🟡 Medium · Broken Functionality · HTML5 Validity
AI works in a limited context. When you ask it to create a "form component," it creates it in isolation.
What AI Does (Before):
You created a ContactForm and an AuthForm. In both cases, the AI used a typical ID for the button:
```tsx
// File ContactForm.tsx
<button id="submit-btn">Submit</button>

// File AuthForm.tsx
<button id="submit-btn">Login</button>
```
The Problem:
The linter checks each file separately. To it, id="submit-btn" inside one file is normal. But when you render both forms on the same page (e.g., on a landing page), your DOM becomes invalid. Two elements with the same ID are a disaster for JavaScript logic and screen readers. A screen reader will simply "lose" one of the buttons, and your document.getElementById call will return the wrong element.
How WebValid Catches It (The Truth): The HTML Syntax Scanner analyzes the final rendered HTML of the entire page. It instantly detects duplicate IDs that were "invisible" to the linter while the components lived in different files.
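The detection itself is a simple frequency count over the rendered page. A toy version (regex-based for brevity; a real scanner walks the parsed DOM):

```typescript
// Toy duplicate-id detector over final rendered HTML.
// Regex extraction is for illustration; real scanners parse the DOM.
function findDuplicateIds(html: string): string[] {
  const counts = new Map<string, number>();
  for (const [, id] of html.matchAll(/\bid="([^"]+)"/g)) {
    counts.set(id, (counts.get(id) ?? 0) + 1);
  }
  return [...counts].filter(([, n]) => n > 1).map(([id]) => id);
}

// Both forms rendered on the same landing page:
const page = `
  <button id="submit-btn">Submit</button>
  <button id="submit-btn">Login</button>`;

console.log(findDuplicateIds(page)); // ["submit-btn"]
```

The key point is the input: this function receives the page *after* composition, which is precisely the view a per-file linter never gets.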
We wrote about how "hallucinations" in the DOM hinder product growth here.
6. Hydration Errors: When Server and Browser Disagree
🔴 High · White Screen · React Hydration Error
This is the most insidious bug of the modern frontend. AI loves to use checks like typeof window !== 'undefined'.
What AI Does (Before): The neural network tries to adapt the code for the browser:
```tsx
// ❌ The linter sees valid JS
const isMobile = typeof window !== "undefined" && window.innerWidth < 768;
return <div>{isMobile ? "Mobile View" : "Desktop View"}</div>;
```
The Problem:
On the server (SSR), there is no window, so "Desktop View" will render. But as soon as the page loads in the browser, React sees that it is actually "Mobile View". A Hydration Mismatch occurs. At best, your layout will "flicker"; at worst, the whole app will crash with an error, leaving the user with a white screen. The linter considers this code safe because it is syntactically logical.
How WebValid Catches It (The Truth): WebValid runs your project in a real Headless browser. If a hydration error or a runtime exception pops up in the console, Network Scanner (in console audit mode) records it as a critical bug.
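The mismatch is easy to reproduce without React at all: the same branch evaluates differently depending on whether `window` exists. A tiny simulation of the decision logic from the snippet above (not React itself):

```typescript
// Simulates the SSR/CSR disagreement: on the server there is no `window`,
// so windowWidth is undefined and the branch always picks the desktop view.
function renderView(windowWidth: number | undefined): string {
  const isMobile = windowWidth !== undefined && windowWidth < 768;
  return isMobile ? "Mobile View" : "Desktop View";
}

const serverHtml = renderView(undefined); // SSR always falls back to desktop
const clientHtml = renderView(375);       // hydration on a phone
console.log(serverHtml !== clientHtml);   // true → hydration mismatch warning
```

The idiomatic fix is to render identical markup on both sides and only switch after mount (e.g. set the flag in a useEffect), so the first client render matches the server output.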
7. Leaky Bundle: Your API Keys in Public Access
🔴 Critical · Data Leak · OWASP A01:2021 (Broken Access Control)
This is the most dangerous bug that AI can "gift" to your project. When generating code, neural networks often use placeholders or ask you to insert keys directly "just to check."
What AI Does (Before): You ask the AI to add Stripe or Firebase integration. The AI generates a client config and obligingly inserts a secret key or token that should only live on the server.
```ts
// ❌ The linter sees a valid JS object. It doesn't know it's a secret.
export const stripeConfig = {
  publicKey: "pk_live_...",
  secret_key: "sk_live_...", // ⚠️ CRITICAL LEAK
};
```
The Problem:
The linter checks syntax. For it, any string is just a string. But as soon as this code enters the bundle, your secret key becomes available to anyone who opens DevTools. Attackers use bots to scan JS files for sk_live, aws_key, and other patterns. The result: drained accounts and compromised user data.
How WebValid Catches It (The Truth): The Security Scanner analyzes the final compiled bundle that the browser receives, not the source files. It uses a database of thousands of patterns from known services and instantly raises an alarm if a secret token or a "forgotten" debug mode is found in the public code.
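In essence, this is pattern matching over the JavaScript the browser actually downloaded. A toy version with two illustrative patterns (a real database covers thousands of services):

```typescript
// Toy secret scanner: look for well-known key prefixes in a compiled bundle.
// The pattern list is a tiny illustrative sample, not a real database.
const SECRET_PATTERNS: [string, RegExp][] = [
  ["Stripe secret key", /sk_live_[0-9a-zA-Z]+/],
  ["AWS access key", /AKIA[0-9A-Z]{16}/],
];

function scanBundle(js: string): string[] {
  return SECRET_PATTERNS
    .filter(([, re]) => re.test(js))
    .map(([name]) => name);
}

// What the browser actually downloaded after the build:
const bundle = 'var cfg={publicKey:"pk_live_abc",secret_key:"sk_live_abc123"};';
console.log(scanBundle(bundle)); // ["Stripe secret key"]
```

Running this over source files would miss keys injected at build time (env substitution, generated configs), which is why scanning the final bundle matters.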
An analysis of how Next.js and Vite bundles reveal secrets is available in the article "Leaked API Keys".
WebValid vs Lighthouse: Why Not Just "axe in CI"?
A technical reader might ask: "Why do I need a separate service if there are Lighthouse and axe DevTools?" The answer lies in the entry barrier and scope:
- Reality vs Lab: Lighthouse in CI checks a "sterile" version. WebValid checks the live address with all its network latencies, expired SSL certificates, and misbehaving CDNs.
- Zero-Config: To integrate axe into CI, you need to write YAML configs, set up runners, and store logs. In WebValid, you just enter the URL and get a Markdown report ready for a prompt in Cursor.
- 24/7 Monitoring: Linters and CI only work at the time of commit. A certificate could expire, or an API key could leak through a third-party script at any time. WebValid monitors this constantly.
Fact-Check: The Accessibility Debt Trap
To show that these aren't just empty scare stories, let's look at the numbers. Linters are widespread in modern development, but this doesn't correlate with accessibility quality, as indirectly confirmed by the WebAIM Million 2024 report, which analyzed the top 1 million home pages:
- 95.9% of sites have at least one critical WCAG 2 error.
- On 81% of sites, text contrast is insufficient (linters don't see this without computing CSS).
- On 54% of sites, alt attributes are missing or useless.
- On 48% of sites, forms are not linked to labels.
Almost all of these sites pass standard npm run lint checks. A linter knows how to slap your wrist for an extra space, but it allows you to release digital junk into the world that is impossible to use.
How It Works Together: ESLint + WebValid
You donât need to give up linters. They are your first line of defense. But they cannot be the last.
| Feature | ESLint (Static) | WebValid (Runtime) |
|---|---|---|
| Syntax errors | ✅ Catches instantly | ❌ Not its job |
| alt text quality | ❌ Only checks presence | ✅ Evaluates meaning and utility |
| SSL and certificates | ❌ Blind | ✅ Monitors 24/7 |
| Broken links (404) | ❌ Blind | ✅ Checks every URL |
| Layout shift (CLS) | ❌ Blind | ✅ Measures in browser |
| Hydration errors | ❌ Blind | ✅ Catches in console |
| Key leaks in bundle | ⚠️ Only patterns in code | ✅ Scans final build |
Implementing WebValid in Your Vibe-Coding
A web audit in 2026 is not a boring PDF for a manager. For a solo developer and "vibe-coder," it is a precise prompt for fixing bugs that a linter cannot even notice.
When to Run an Audit?
- Before every deploy to production (sanity check).
- When receiving strange bugs from users that donât reproduce locally.
- When an AI assistant generates too much code dynamically and you lose control over semantics.
What You Get in 60 Seconds:
You enter the staging URL and receive a Markdown report. No need to configure anything, write YAML configs, or wait for CI pipeline completion. The report contains ready-to-use ai-fix instructions: just copy them and paste them into Cursor or ChatGPT.
Your New Quality Checklist:
- Lint: ESLint brushes up the syntax (5 sec).
- Deploy: Code flies to Vercel/Netlify Preview (30 sec).
- Audit: WebValid checks the "live truth" (60 sec).
- Fix: You feed the Markdown report to your AI and fix everything in one prompt.
Digital cleanliness is when the linter is green in the editor, and WebValid confirms that your product is accessible, secure, and fast in the real world.
Don't guess what your AI broke today. Get a deterministic list of logic and runtime bugs on your site right now. The first report is free, with no registration or complex settings required.