2026-04-24 · 9 min read · navable Team
MCP Server · Agent Skills · WCAG 2.1 · Accessibility Testing · BFSG · AI Developer Tools · Open Source · axe-core

Automate Web Accessibility with AI: How Agents Use navable MCP for Your WCAG Audits

Two free tools that bring automated accessibility testing into the place where code gets written. No dashboards. No browser extensions. Just your IDE and an AI agent.


The Gap in the Developer Toolchain

Developers have linters for code quality, formatters for consistency, and type checkers for correctness. These tools run in the editor, catch problems early, and integrate into CI pipelines.

For accessibility, the story looks different. The typical workflow is:

  1. Build a feature
  2. Ship it
  3. Weeks later, receive a PDF from an auditor listing 40+ findings
  4. Go back and fix things in code you've half-forgotten

This disconnect — building in one place, testing in another, fixing much later — is why accessibility debt accumulates. It's not that developers don't care. It's that the feedback loop is too slow and the tooling doesn't meet them where they work.

With the BFSG (Barrierefreiheitsstärkungsgesetz) in effect since June 2025, requiring WCAG 2.1 Level AA for digital services in Germany, and the European Accessibility Act extending similar rules across the EU, this gap is becoming a legal risk, not just a UX concern.

We built two open-source tools to close it.

What We're Releasing

Each tool is useful on its own — but they're designed to work together. The MCP server provides the scanning engine. The agent skills provide structured, step-by-step instructions. When both are set up, your AI agent follows a deterministic, repeatable workflow — scan, plan, fix, verify — instead of improvising solutions. Same input, same process, every time.

1. @navable/mcp — An MCP Server for Accessibility Scanning

The Model Context Protocol (MCP) is an open standard that lets AI coding agents call external tools. Our MCP server connects your agent to a real Chromium browser running axe-core, the industry-standard accessibility testing engine.

It provides three tools:

  • run_accessibility_scan — Scans a URL against WCAG 2.1 Level A + AA rules. Returns structured violations with severity, affected elements, WCAG success criteria, and EN 301 549 clause mapping.
  • generate_fix_plan — Converts scan results into a prioritized .navable-plan.json file. Critical issues first, minor items last. Each entry includes the rule violated, affected nodes, and a fix description.
  • update_fix_status — Tracks progress as items move from pending to done. The plan file becomes a living document your team can reference.

Works with Cursor, VS Code (GitHub Copilot), and Claude Code.

npm: @navable/mcp | GitHub: web-DnA/navable-web-accessibility-mcp
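
Tool calls travel over MCP's standard JSON-RPC interface, so a scan request from the agent looks roughly like this on the wire (the url argument name is illustrative; in practice the agent fills this in for you):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "run_accessibility_scan",
    "arguments": { "url": "http://localhost:3000" }
  }
}

Because the contract is plain JSON-RPC, the server is easy to test and script outside an IDE as well.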

2. navable Agent Skills — Workflow Instructions for AI Coding Agents

Agent skills are structured instructions that teach AI agents specific workflows. Copy them into your project from the GitHub repository. Ours cover four scenarios:

  • scan-accessibility — Full workflow: scan a URL → generate fix plan → apply fixes → re-scan to verify
  • fix-accessibility — Resume an existing .navable-plan.json and work through pending items
  • review-component — Review a component's source code for accessibility issues — no browser needed
  • audit-page-structure — Check landmarks, heading hierarchy, skip links, and document metadata

The scan skill includes 10 fix guides covering images, forms, color contrast, ARIA, keyboard navigation, headings, landmarks, language, navigation, and tables — with before/after code examples for 55 common violations.

GitHub: web-DnA/navable-web-accessibility-skills
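
If you're curious what a skill looks like on disk: in Claude Code's format, each skill is a folder containing a SKILL.md with YAML frontmatter that tells the agent when the skill applies, followed by the workflow steps. A rough sketch of the shape (assumed for illustration, not copied from the repo):

---
name: scan-accessibility
description: Scan a URL for WCAG 2.1 A/AA issues, generate a fix plan, fix, and verify
---

1. Call run_accessibility_scan with the URL the user provides.
2. Call generate_fix_plan to write .navable-plan.json.
3. Work through items in priority order, marking each via update_fix_status.
4. Re-scan the URL and confirm the violations are resolved.

Because the instructions are files in version control, every developer's agent runs the same procedure. That's where the determinism comes from.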

How It Works in Practice

With both tools configured, the workflow becomes deterministic. The agent doesn't guess what to do next — the skills define the exact sequence, and the MCP server executes each step.

A developer starts their dev server and types into their IDE:

Scan http://localhost:3000 for accessibility issues

Here's what that looks like in practice:

Step 1: Real Browser Scan

The MCP server launches headless Chromium, navigates to the URL, and runs axe-core. This tests the actual rendered page — JavaScript-generated content, dynamic components, CSS-driven visibility — not just static HTML.
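
The scanning step is conceptually close to what you could script yourself with Playwright and the @axe-core/playwright package. A minimal sketch of the idea (an illustration, not the server's actual code):

// scan-sketch.ts — conceptual sketch using Playwright + axe-core
import { chromium } from "playwright";
import { AxeBuilder } from "@axe-core/playwright";

const browser = await chromium.launch({ headless: true });
const page = await browser.newPage();
await page.goto("http://localhost:3000");

// Limit axe to the WCAG 2.x Level A + AA rule tags
const results = await new AxeBuilder({ page })
  .withTags(["wcag2a", "wcag2aa", "wcag21a", "wcag21aa"])
  .analyze();

for (const v of results.violations) {
  console.log(`${v.impact}: ${v.id} (${v.nodes.length} elements)`);
}
await browser.close();

Wrapping this behind an MCP tool means the agent gets the same capability on demand, with results enriched by severity ordering and the EN 301 549 mapping described below.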

Step 2: Structured Results

Every violation comes back with:

  • Severity level — critical, serious, moderate, or minor
  • CSS selectors + HTML snippets — pointing to the exact affected elements
  • WCAG success criterion — e.g., 1.4.3 Contrast (Minimum)
  • EN 301 549 clause — e.g., §9.1.4.3 — the European standard referenced by BFSG
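
Most of this comes straight from axe-core's native output. A single raw axe violation looks roughly like the following (abridged; the server adds the EN 301 549 clause on top):

{
  "id": "color-contrast",
  "impact": "serious",
  "tags": ["wcag2aa", "wcag143"],
  "help": "Elements must meet minimum color contrast ratio thresholds",
  "nodes": [
    {
      "html": "<span class=\"hint\">Optional</span>",
      "target": [".hint"]
    }
  ]
}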

Step 3: Fix Plan Generation

The agent creates .navable-plan.json — a structured, priority-sorted fix plan. Each item tracks:

{
  "id": "fix-1",
  "ruleId": "color-contrast",
  "impact": "serious",
  "wcagSc": ["1.4.3"],
  "en301549": "§9.1.4.3",
  "help": "Elements must meet minimum color contrast ratio thresholds",
  "status": "pending"
}
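
Since the plan is plain JSON, it is also scriptable outside the agent. For example, counting open items with jq (this assumes a top-level items array; adjust to the generated file's actual shape):

jq '[.items[] | select(.status == "pending")] | length' .navable-plan.json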

Step 4: Guided Fixes

The agent loads the relevant fix guide, locates the source file using the CSS selector and HTML snippet from the scan, applies the code change, and marks the item as done. It moves through the plan in priority order — critical first, minor last.
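
For a concrete sense of what a guided fix looks like, here is a representative before/after in the style of the fix guides (illustrative; the actual guides live in the skills repo):

<!-- Before: flagged by the image-alt rule; the image has no accessible name -->
<img src="/img/quarterly-revenue.png">

<!-- After: alt text conveys the information the image carries -->
<img src="/img/quarterly-revenue.png"
     alt="Bar chart: quarterly revenue grew from 1.2M EUR in Q1 to 1.8M EUR in Q4">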

Step 5: Verification

After fixes are applied, the agent re-scans the page to confirm violations are resolved. The plan file records a verification timestamp for audit documentation.
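
As a mental model of the end state, the plan item from Step 3 might look like this once verified (the verifiedAt field name is illustrative; check the generated file for the exact schema):

{
  "id": "fix-1",
  "ruleId": "color-contrast",
  "impact": "serious",
  "status": "done",
  "verifiedAt": "2026-04-24T14:32:10Z"
}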

The full cycle — scan, plan, fix, verify — takes minutes. Not days.

The EN 301 549 Mapping (and Why It Matters)

Most accessibility testing tools report WCAG success criteria. That's useful, but incomplete for European compliance.

BFSG references EN 301 549 — the harmonized European standard for ICT accessibility. When a compliance officer or market surveillance authority (Marktüberwachungsbehörde) asks which requirements you've addressed, they speak in EN 301 549 clause numbers, not WCAG success criteria.

navable maps every finding to both. When the scan reports a contrast violation, it tells you it's WCAG 1.4.3 and EN 301 549 §9.1.4.3. Your compliance documentation builds itself as a byproduct of development.

Setup in Under 2 Minutes

For VS Code (GitHub Copilot)

Create .vscode/mcp.json:

{
  "servers": {
    "navable": {
      "command": "npx",
      "args": ["-y", "@navable/mcp"]
    }
  }
}

For Cursor

Create .cursor/mcp.json:

{
  "mcpServers": {
    "navable": {
      "command": "npx",
      "args": ["-y", "@navable/mcp"]
    }
  }
}

For Claude Code

claude mcp add navable -- npx -y @navable/mcp

That's it. The package installs via npx. Chromium downloads automatically on first scan (~150 MB one-time). No API keys, no accounts, no configuration beyond this.

To add agent skills, copy the skills folder into your project:

# VS Code
cp -r skills/* .github/skills/

# Cursor
cp -r skills/* .agents/skills/

# Claude Code
cp -r skills/* .claude/skills/

What Gets Tested

The scanner checks 50 WCAG 2.1 success criteria (Level A + AA) across 10 categories:

  • Images & media — image-alt, svg-img-alt, video-caption
  • Forms & labels — label, select-name, autocomplete-valid
  • Color & contrast — color-contrast, link-in-text-block
  • Navigation & links — link-name, bypass, button-name
  • Headings & structure — heading-order, document-title, page-has-heading-one
  • Landmarks & regions — region, landmark-one-main
  • ARIA — aria-required-attr, aria-valid-attr-value, aria-roles
  • Keyboard — tabindex, focus management patterns
  • Language — html-has-lang, html-lang-valid
  • Tables — header associations, caption usage

The fix guides provide before/after code for 55 documented patterns, so the agent doesn't guess — it applies tested solutions.

Built-In Reference Resources

The MCP server also exposes reference documentation as MCP resources:

  • WCAG ↔ EN 301 549 mapping — full cross-reference table with testability ratings
  • Fix patterns by rule — request specific patterns by axe rule ID (e.g., navable://docs/fix-patterns/color-contrast,label)
  • ARIA patterns — 25 WAI-ARIA APG widget patterns with keyboard requirements
  • Semantic HTML reference — HTML elements mapped to implicit ARIA roles

These load on demand and keep the agent's context focused on what's relevant to the current fix.
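
Resource loading uses the standard MCP resources/read request, addressed by URI. For the fix-patterns example above, the agent's request would look roughly like:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/read",
  "params": {
    "uri": "navable://docs/fix-patterns/color-contrast,label"
  }
}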

What It Doesn't Replace

Automated scanning catches an estimated 30–40% of accessibility barriers. It's effective for structural issues, missing attributes, contrast ratios, and ARIA misuse. It cannot assess:

  • Keyboard navigation flow quality
  • Screen reader announcement order and clarity
  • Cognitive load and content readability
  • Whether alt text is actually meaningful
  • Complex interaction patterns in context

navable handles the automatable portion so your team can focus manual testing time on the things only humans can evaluate. It's the accessibility equivalent of a linter — it catches the mechanical issues fast, but doesn't replace expert review.

Privacy and Security

Everything runs locally. The MCP server starts a local Chromium instance that connects to your localhost dev server. No data is sent to external services. No telemetry. No analytics. The codebase is open source and MIT-licensed — inspect it yourself.

Who This Is For

  • Developers who want accessibility feedback in their editor while they build, not in a PDF weeks later
  • Team leads looking to make accessibility a continuous part of development instead of a periodic audit
  • Compliance teams who need EN 301 549 mapping for BFSG documentation
  • Agencies delivering BFSG-compliant websites for EU clients

Get Involved

This is version 0.1 — actively developed and open to feedback. If you run into issues, have feature requests, or want to contribute fix patterns:

  • Open an issue on GitHub
  • Star the repos if you find them useful
  • PRs welcome — especially for additional fix patterns and ARIA guides

Building accessible products shouldn't require specialized tooling budgets. With these tools, it's part of the workflow.

