Blind Code: Top 7 Critical Accessibility Errors AI Assistants Always Make
Stack Context: The examples in this article apply to React, Next.js (App Router), and Vite projects. However, the accessibility principles apply to any framework (Vue, Svelte, Angular).
You’ve just “vibe-coded” a stunning new landing page in seconds using your favorite AI assistant. Cursor or Copilot generated the entire UI from a single prompt. It looks great on your monitor, the animations are smooth, and the deployment went off without a hitch. But under the hood, you might have just blocked users relying on screen readers or keyboard navigation from accessing your site.
When AI models generate code, they prioritize visual aesthetics and immediate functionality. They lack the context to choose semantic structure, so their output frequently fails accessibility audits. If you don’t check it, you are shipping a product that is invisible to screen readers and impossible to navigate with a keyboard.
Here are the top 7 critical accessibility errors AI-generated code introduces, and how to spot them before your next deployment.
1. “Div Soup” and Phantom Buttons
🔴 Critical · Unreachable by Screen Readers · WCAG 4.1.2 (Name, Role, Value)
If you ask an AI assistant to “create a custom dropdown,” its favorite solution is to build a massive stack of <div> elements and slap an onClick event on them. We call this “Div Soup.”
The AI’s Approach (Before):
// ❌ AI-Generated Code: Visually works, completely inaccessible
<div onClick={() => setIsOpen(true)}>Open Menu</div>
Why it’s broken:
A <div> is inherently non-interactive. When a screen reader encounters it, it just reads “Open Menu” without announcing that it’s a clickable action. Furthermore, a user navigating via the Tab key will bypass this element entirely, making your site unusable for anyone who cannot use a mouse.
The Fix (After):
// ✅ Correct Code: Native button provides built-in accessibility
<button onClick={() => setIsOpen(true)}>Open Menu</button>
Native elements like <button> and <a> come with built-in focus management, keyboard bindings (Enter and Space), and accessibility roles. If you must use a custom <div>, you have to manually add role="button", tabIndex={0}, and keyboard event handlers, but it’s almost always better to force the AI to use semantic HTML.
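If you are stuck with a custom <div>, the keyboard wiring a native <button> gives you for free has to be rebuilt by hand. A minimal sketch of that wiring (makeButtonKeyHandler is our name for illustration, not a React or library API):

```javascript
// Hypothetical helper: maps the keys a native <button> already handles
// (Enter and Space) onto an activate callback for a div with role="button".
function makeButtonKeyHandler(onActivate) {
  return function handleKeyDown(event) {
    if (event.key === "Enter" || event.key === " ") {
      event.preventDefault(); // stop Space from scrolling the page
      onActivate(event);
    }
  };
}

// Usage in JSX (sketch):
// <div
//   role="button"
//   tabIndex={0}
//   onClick={open}
//   onKeyDown={makeButtonKeyHandler(open)}
// >
//   Open Menu
// </div>
```

Note how much of the native element you are re-implementing for one div — which is exactly why semantic HTML is the better prompt.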
Having trouble tracking down all the rogue divs in your generated components? Check out our guide on Markdown-Driven QA to see how you can automate this check.
2. The Unlabeled Form Input
🟡 High · Form Abandonment · WCAG 1.3.1 (Info and Relationships)
Forms are the lifeblood of any SaaS product. When generating a signup form, AI will usually create a beautiful layout with an unassociated <span> or placeholder text standing in for a real label.
The AI’s Approach (Before):
// ❌ AI-Generated Code: No programmatic relationship
<div>
  <span>Email Address</span>
  <input
    type="email"
    placeholder="Enter your email"
  />
</div>
Why it’s broken: Visual grouping isn’t enough. Screen readers need a direct programmatic link between a label and its input. Without it, the user moves to the input field and the screen reader simply announces “Edit text, blank.” They have no idea what data they are supposed to enter.
The Fix (After):
// ✅ Correct Code: Programmatically linked label
<div>
  <label htmlFor="email-input">Email Address</label>
  <input
    id="email-input"
    type="email"
    placeholder="john@example.com"
  />
</div>
Always instruct your AI to “use properly linked <label> tags with htmlFor properties for all inputs.”
3. The Modal Keyboard Trap
🔴 Critical · User Trapped · WCAG 2.1.2 (No Keyboard Trap)
Modals and dialogs are notoriously difficult to get right. When an AI generates a modal from scratch, it almost never implements proper focus management.
When a modal opens, keyboard focus must be trapped within the modal so the user doesn’t accidentally tab into the obscured background content. However, the AI often forgets to provide a way out. If there’s no defined close button mapping to the Esc key, a keyboard-only user is officially “trapped” on your page and will have to refresh the browser to escape.
The Fix:
Stop letting the AI write complex modal logic from scratch. Instruct it to use the HTML <dialog> element, which all modern browsers support. Opened via showModal(), it handles focus management and the Esc key automatically.
// ✅ Correct Code: Native dialog using useImperativeHandle for robust behavior
import { useRef, useImperativeHandle, forwardRef } from "react";

export const NewsletterModal = forwardRef((props, ref) => {
  const dialogRef = useRef<HTMLDialogElement>(null);

  // Expose safe imperative methods to the parent
  useImperativeHandle(ref, () => ({
    open: () => dialogRef.current?.showModal(),
    close: () => dialogRef.current?.close(),
  }));

  return (
    <dialog ref={dialogRef}>
      <h2>Subscribe to Newsletter</h2>
      {/* Content */}
      <button onClick={() => dialogRef.current?.close()}>Close</button>
    </dialog>
  );
});

// Parent component usage:
// const modalRef = useRef(null);
// <button onClick={() => modalRef.current?.open()}>Open</button>
// <NewsletterModal ref={modalRef} />
4. Invisible Focus Indicators
🟡 High · Navigation Confusion · WCAG 2.4.13 (Focus Appearance)
AI assistants love to generate ultra-minimalist designs. A common “feature” of AI-generated CSS is stripping out the default browser focus rings because they “look ugly.”
// ❌ AI-Generated Code: Removes focus rings
<button style={{ outline: "none" }}>Submit</button>
Why it’s broken:
If you remove the outline, keyboard users have no visual indication of where they are on the page. WCAG requires a visible focus indicator (2.4.7 Focus Visible), and WCAG 2.2’s Focus Appearance criterion calls for at least a 3:1 contrast ratio for it. If your AI removes the default outline, tell it to replace it with a styled one using the :focus-visible pseudo-class.
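A replacement might look like the following sketch — :focus-visible shows the ring for keyboard navigation without flashing it on mouse clicks (the color here is a placeholder; verify it actually meets the 3:1 ratio against your background):

```css
/* Show a clear ring only for keyboard focus, not mouse clicks */
button:focus-visible {
  outline: 2px solid #1a56db; /* placeholder color — check contrast yourself */
  outline-offset: 2px;
}
```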
If your generated styles are causing performance bottlenecks on top of accessibility issues, read our breakdown of the Top 5 AI CSS Mistakes.
5. Empty or Hallucinated Alt-Text
🟠 Medium · Broken Context · WCAG 1.1.1 (Non-text Content)
AI lacks the business context of your application. When it generates an <img> tag, it will omit the alt attribute entirely, fill it with useless filler like "image", or hallucinate a description from the file name.
// ❌ AI-Generated Code: Useless or missing context
<img src="/assets/hero-bg.jpg" alt="image" />
<img src="/icons/checkmark.svg" alt="Checkmark icon vector graphic" />
Why it’s broken:
If an image is purely decorative (like a background pattern or a generic icon next to text), it must have an empty alt attribute (alt=""). This tells the screen reader to safely ignore it. If it is informative (like a chart), it needs a precise description. AI cannot make this distinction for you. You must manually audit alt tags to ensure they provide actual value.
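You can still automate the obvious failures. Here is a sketch of a filler-alt-text check (auditAltText and FILLER_WORDS are hypothetical names for illustration; tune the word list to your codebase):

```javascript
// Hypothetical audit helper: flags alt text that is missing entirely or
// that just restates "this is an image" without describing it.
// An empty string (alt="") is fine — it marks the image as decorative.
const FILLER_WORDS = ["image", "icon", "photo", "picture", "graphic", "vector"];

function auditAltText(alt) {
  if (alt === undefined || alt === null) return "missing"; // worst case: reader reads filename
  const trimmed = alt.trim().toLowerCase();
  if (trimmed === "") return "decorative"; // explicitly ignored — OK
  const words = trimmed.split(/\s+/);
  // Every word is generic filler -> the alt adds no information
  if (words.every((w) => FILLER_WORDS.includes(w))) return "filler";
  return "descriptive";
}
```

A check like this can flag "image" and "icon vector graphic", but it still cannot tell you whether a descriptive alt is business-accurate — that part remains a human audit.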
6. Illegal DOM Nesting (The A11y Tree Breaker)
🔴 Critical · Broken Parsing · HTML5 Syntax Error
When you prompt an AI to “make a button that routes to checkout,” it often generates illegally nested HTML—like placing a <button> tag inside an <a> tag.
The AI’s Approach (Before):
// ❌ AI-Generated Code: Illegal nesting fails HTML parsers
<a href="/checkout">
<button onClick={trackEvent}>Buy Now</button>
</a>
Why it’s broken: Modern browsers attempt to auto-correct illegal syntax, which completely corrupts the Accessibility Tree. When a screen reader hits this block, it gets confused by a nested interactive element. It will either announce it twice, skip it entirely, or trap the keyboard focus in a loop.
The Fix (After):
// ✅ Correct Code: Semantic link styled as a button
import Link from "next/link";
<Link
  href="/checkout"
  className="btn-primary"
  onClick={() => {
    trackEvent("checkout_clicked");
  }}
>
  Buy Now
</Link>
To catch these errors early, use an HTML Syntax scanner to statically analyze the rendered DOM structure for invalid nesting rules.
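The core of such a scanner is a simple tree walk: flag any interactive element nested inside another interactive element. A minimal sketch (findNestedInteractive is a hypothetical helper operating on a simplified element tree, not a real library API):

```javascript
// Tags that are interactive per the HTML spec and must never be nested.
const INTERACTIVE = new Set(["a", "button", "input", "select", "textarea"]);

// Walk a simplified tree ({ tag, children }) and collect the path to every
// interactive element that sits inside another interactive element.
function findNestedInteractive(node, insideInteractive = false, path = []) {
  const errors = [];
  const here = [...path, node.tag];
  if (INTERACTIVE.has(node.tag) && insideInteractive) {
    errors.push(here.join(" > "));
  }
  const nowInside = insideInteractive || INTERACTIVE.has(node.tag);
  for (const child of node.children ?? []) {
    errors.push(...findNestedInteractive(child, nowInside, here));
  }
  return errors;
}
```

Running this over the “Before” snippet above would report the path a > button, which is exactly the nesting the browser will try to auto-correct.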
7. The “Aria-Hidden” Nuke
🔴 Critical · Invisible Content · WCAG 1.3.1 (Info and Relationships)
Axe-core scanners often catch a devastating overcorrection: if you ask an AI to “fix the screen reader issue on the background element,” it will frequently slap aria-hidden="true" on the parent wrapper container.
The AI’s Approach (Before):
// ❌ AI-Generated Code: Silences the entire application
<main aria-hidden="true">
  <div className="decorative-background" />
  <form>
    <label htmlFor="email">Email</label>
    <input
      id="email"
      type="email"
    />
    <button>Submit</button>
  </form>
</main>
Why it’s broken:
When aria-hidden="true" is applied to a parent, every child element inside it is wiped from the Accessibility Tree. The entire form becomes completely invisible to blind users, even if the inputs themselves are perfectly labeled. The visual UI looks untouched, making this bug almost impossible to catch without an axe-core audit.
The Fix (After):
// ✅ Correct Code: Hiding only the decorative element
<main>
  <div
    aria-hidden="true"
    className="decorative-background"
  />
  <form>
    <label htmlFor="email">Email</label>
    <input
      id="email"
      type="email"
    />
    <button>Submit</button>
  </form>
</main>
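The rule behind this bug is mechanical: an element disappears from the Accessibility Tree if it, or any ancestor, carries aria-hidden="true". A minimal sketch of the check an auditor runs (the tree shape and the findHiddenControls name are our simplification, not the axe-core API):

```javascript
// Walk a simplified tree ({ tag, ariaHidden, children }) and collect form
// controls that an aria-hidden ancestor wipes from the Accessibility Tree.
function findHiddenControls(node, hiddenByAncestor = false) {
  const hidden = hiddenByAncestor || node.ariaHidden === true;
  const found = [];
  if (hidden && ["input", "button", "select", "textarea"].includes(node.tag)) {
    found.push(node.tag);
  }
  for (const child of node.children ?? []) {
    found.push(...findHiddenControls(child, hidden));
  }
  return found;
}
```

Against the “Before” markup this reports the input and the submit button as hidden; against the “After” markup it reports nothing, because only the decorative div is excluded.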
Fact-Check: The Accessibility Debt Crisis
To understand why relying solely on AI is dangerous, we must look at the current state of the web. The WebAIM Million 2024 Report analyzed the top 1 million home pages and found that 95.9% of them had detectable WCAG 2 failures.
The most common failures match exactly with the code AI tends to hallucinate:
- Low contrast text (81%)
- Missing alternative text for images (54%)
- Missing form input labels (48%)
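The biggest offender, low contrast, is also the easiest to verify mechanically. WCAG 2 defines contrast as a ratio of relative luminances in sRGB; here is a minimal sketch in plain JavaScript (the function names are ours):

```javascript
// WCAG 2 relative luminance for an sRGB color given as [r, g, b] in 0-255.
function relativeLuminance([r, g, b]) {
  const [R, G, B] = [r, g, b].map((c) => {
    const s = c / 255;
    // Linearize the gamma-encoded channel per the WCAG 2 formula
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05).
// Normal text needs >= 4.5:1; large text and focus indicators need >= 3:1.
function contrastRatio(fg, bg) {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}
```

Black on white scores the maximum 21:1; the pale-gray-on-white text AI loves to generate often lands below 3:1, which is why it tops the WebAIM failure list.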
When you vibe-code without auditing, you aren’t just making a mistake—you are directly contributing to the largest accessibility gap in the industry.
WebValid Audit Capabilities
AI models are incredible at writing code, but they are terrible at seeing the final rendered DOM. You can’t just paste code into ChatGPT and ask “Is this accessible?” because a standalone React component doesn’t show how it behaves in the browser.
WebValid bridges this gap by scanning the fully rendered HTML and acting as a technical translator for your AI copilot.
| Feature Area | WebValid Capability | Limitations |
|---|---|---|
| Broken Semantics / ARIA | ✅ Checks fully rendered HTML output. | Cannot determine if alt-text context is business-accurate. |
| Focus Rings | ✅ Verifies existence of CSS focus states. | Cannot determine if subjective contrast ratio is visually pleasing. |
| Form Labels | ✅ Validates programmatic links (htmlFor). | Cannot determine if the label text makes logical sense. |
WebValid uses advanced headless browser testing to mimic how a real screen reader parses your DOM, finding the errors your AI assistant left behind.
Your Accessibility Testing Checklist
Before deploying AI-generated code, run through this quick list:
- The Tab Test: Unplug your mouse and try to navigate your site using only Tab, Shift + Tab, and Enter.
- The Modal Test: Open every modal/dropdown and press Esc. Does it close?
- The Screen Reader Test: Turn on VoiceOver (Mac) or NVDA (Windows) and close your eyes. Can you fill out your primary form?
Your AI co-pilot can write excellent code—it just doesn’t know where it went wrong. If you give it a map of errors, it can fix everything itself. Don’t guess if your site is compliant. Get a deterministic audit of your rendered DOM, convert it into an AI fix-prompt, and solve these issues in minutes.